VitalContribution committed (verified)
Commit 8892948 · Parent: 7501533

Update README.md

Files changed (1):
  1. README.md: +1 −2
README.md CHANGED

@@ -11,7 +11,7 @@ tags: []
  - **Output:** Joke or No-joke sentiment
 
  ## Training Data
- - **Dataset:** 200k Short Texts for Humor Detection:
+ - **Dataset:** 200k Short Texts for Humor Detection
  - **Link:** https://www.kaggle.com/datasets/deepcontractor/200k-short-texts-for-humor-detection
  - **Size:** 200,000 labeled short texts
  - **Distribution:** Equally balanced between humor and non-humor
@@ -29,7 +29,6 @@ DistilBERT base model (uncased), a distilled version of BERT optimized for efficiency
  | Batch Size | 32 (per device) |
  | Learning Rate | 2e-4 |
  | Weight Decay | 0.01 |
- | Max Steps | Total training steps |
  | Epochs | 2 |
  | Warmup Steps | 100 |
  | Best Model Selection | Based on eval_loss |
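
For readers reconstructing the setup, the hyperparameters in the table map onto Hugging Face `TrainingArguments` roughly as follows. This is a minimal sketch under stated assumptions, not the training script behind this commit: the CSV filename, the `text`/`humor` column names, the train/eval split, and the output directory are all placeholders inferred from the linked Kaggle dataset.

```python
# Minimal sketch reconstructing the training setup from the hyperparameter
# table above. NOT the repository's actual training script: the CSV filename,
# the "text"/"humor" column names, and the 90/10 split are assumptions based
# on the linked Kaggle dataset.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # joke vs. no-joke
)

# Assumed layout of the Kaggle CSV: a "text" column and a boolean "humor" label.
dataset = load_dataset("csv", data_files="dataset.csv")["train"]
dataset = dataset.map(lambda ex: {"labels": int(ex["humor"])})
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
)
splits = dataset.train_test_split(test_size=0.1, seed=42)  # assumed split

args = TrainingArguments(
    output_dir="humor-detection",       # placeholder
    per_device_train_batch_size=32,     # Batch Size: 32 (per device)
    learning_rate=2e-4,                 # Learning Rate: 2e-4
    weight_decay=0.01,                  # Weight Decay: 0.01
    num_train_epochs=2,                 # Epochs: 2
    warmup_steps=100,                   # Warmup Steps: 100
    eval_strategy="epoch",              # evaluate once per epoch (transformers >= 4.41)
    save_strategy="epoch",              # checkpoint once per epoch
    load_best_model_at_end=True,        # Best Model Selection ...
    metric_for_best_model="eval_loss",  # ... based on eval_loss
    greater_is_better=False,            # lower loss is better
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
)
trainer.train()
```

With `load_best_model_at_end=True`, the Trainer checkpoints each epoch and, after training, reloads the checkpoint with the lowest `eval_loss`, which is what the "Best Model Selection" row describes. This is presumably also why the "Max Steps" row was dropped in this commit: with `num_train_epochs` set, the total step count follows from the dataset size and batch size rather than being specified directly.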