ryantwolf committed
Commit: c07a8e4
Parent: 0cf2c0c

Update README.md

Files changed (1):
  1. README.md +0 -3

README.md CHANGED
@@ -17,9 +17,6 @@ The model architecture is Deberta V3 Base
 Context length is 1024 tokens
 # Training (details)
 ## Training data:
-- 1 million Common Crawl samples, labeled using Google Cloud’s Natural Language API: https://cloud.google.com/natural-language/docs/classifying-text
-- 500k Wikepedia articles, curated using Wikipedia-API: https://pypi.org/project/Wikipedia-API/
-## Training steps:
 The training set is 22828 Common Crawl text samples, labeled as "High", "Medium", "Low". Here are some examples:
 1. Input:
 ```