Shahriar committed
Commit f558124 · verified · Parent: c88dc48

Update README.md

Files changed (1): README.md (+63 −57)
README.md CHANGED
@@ -11,92 +11,98 @@ base_model: FacebookAI/roberta-base
  pipeline_tag: text-classification
  ---

- # WebSector-Flexible Model Card

- ## Model Overview

- The WebSector-Flexible model is a transformer-based multi-sector website classification model, fine-tuned using the RoBERTa architecture with LoRA and trained on the WebSector dataset, consisting of 109,476 websites in the training set. The flexible mode is designed to maximize recall by identifying both primary and secondary sectors of websites, making it suitable for applications that require broad coverage across multiple sectors. This mode is ideal for exploratory tasks or when it's critical to capture all possible sector associations.

- ## Model Details

- - **Model type**: Transformer-based (RoBERTa + LoRA)
- - **Training dataset**: WebSector Corpus (Training set: 109,476 websites)
- - **Prediction modes**: Flexible mode
- - **Task**: Multi-sector website classification
- - **Architecture**: RoBERTa transformer fine-tuned with LexRank summarization for handling lengthy content
- - **Special Technique**: Single Positive Label (SPL) paradigm for multi-label classification with WAN loss

- ## Intended Uses & Limitations

- ### Use Cases
  - **Website categorization**: Classifies websites into multiple sectors for general exploration or broader categorization tasks.
  - **Research**: Suitable for research on multi-sector classification or multi-label classification tasks where label dependencies are important.
  - **Content Management**: Can be used in platforms where it's important to categorize content across multiple industries or sectors.

- ### Limitations
- - **Single Positive Label**: Trained with only the primary sector observable, potentially limiting its accuracy in predicting secondary sectors.
  - **Flexible mode**: Focuses on recall, which may lead to over-predicting some sectors in websites with ambiguous content.
- - **Data Imbalance**: Some sectors are underrepresented in the dataset, which may affect model performance on certain sectors.

- ## Dataset

- - **Dataset name**: WebSector Corpus
- - **Training set size**: 109,476 websites
- - **Sectors**:
- 1. Finance, Marketing & HR
- 2. Information Technology & Electronics
- 3. Consumer & Supply Chain
- 4. Civil, Mechanical & Electrical
- 5. Medical
- 6. Sports, Media & Entertainment
- 7. Education
- 8. Government, Defense & Legal
- 9. Travel, Food & Hospitality
- 10. Non-Profit
- - **Labeling**: Each website is labeled with its primary sector, derived from self-declared industry categories.

- ## Evaluation Metrics

- - **Top-1 Recall**: Measures the model's ability to correctly identify the primary sector as the most likely predicted sector.
- - **Top-3 Recall**: Evaluates the model's capacity to have the true sector within the top three predicted labels.
- - **Recall**: Assesses the model's ability to predict all relevant sectors, not just the primary one.

- The flexible mode maximizes recall, making it ideal for capturing as many relevant sectors as possible, though it may compromise precision.

- ## Training Process

  ### Hyperparameters:
- - Number of epochs: 7
- - Batch size: 8
- - Learning rate: 5×10^-6
- - Weight decay: 0.1
- - LoRA rank: 128
- - LoRA alpha: 512
- - Dropout rate: 0.1

  ### Training Setup:
- - **Hardware**: Four GPUs, including two NVIDIA RTX A5000 and two NVIDIA TITAN RTX units for parallel processing.
- - **Software**: PyTorch framework and Hugging Face Transformers library.
- - **Strategy**: Distributed training across four GPUs, with model selection based on the lowest validation loss.

- ## Model Performance

- - Top-1 Recall: 68%
- - Top-3 Recall: 85%
- - Recall: 86%
- - Precision: 68%

- These metrics show that the flexible mode of the WebSector model is optimized for recall, allowing it to capture multiple relevant sectors while maintaining a solid precision score.

  ## Ethical Considerations

- - **Privacy Enforcement**: This model can assist in classifying websites into sectors relevant to privacy regulations like CCPA or HIPAA.
  - **Bias**: As the model was trained on self-declared sector labels, there is potential for bias due to inaccurate or incomplete labeling.

  ## Citation

  If you use this model in your research, please cite the following paper:

- ```
- Shahriar Shayesteh, Mukund Srinath, Lee Matheson, Florian Schaub, C. Lee Giles, and Shomir Wilson. "WebSector: A New Insight into Multi-Sector Website Classification Using Single Positive Labels". Conference acronym 'XX, June 03–05, 2018, Woodstock, NY.
  ```
 
  pipeline_tag: text-classification
  ---

+ # WebSector-Flexible

+ ## Model description

+ The **WebSector-Flexible** model is a RoBERTa-based transformer designed for high-recall website classification into one of ten broad sectors. It is part of the WebSector framework, which introduces a Single Positive Label (SPL) paradigm for multi-label classification using only the primary sector of websites. The flexible mode of this model focuses on maximizing recall by identifying both primary and secondary sectors, making it ideal for exploratory tasks or when it's critical to capture all possible sector associations.
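The SPL idea can be illustrated with a small sketch. This card does not spell out the exact training objective, so the function below shows one common SPL variant, a "weak assume negative" style loss in which the single observed positive sector gets full weight and all unobserved sectors are treated as down-weighted negatives; `wan_loss` and its 1/(L−1) weighting are illustrative assumptions, not the released training code.

```python
import numpy as np

NUM_SECTORS = 10  # the ten WebSector sectors

def wan_loss(probs, positive_idx):
    """SPL loss with a single observed positive label (illustrative).

    The observed sector contributes a full log-loss term; every
    unobserved sector is *assumed* negative but down-weighted by
    1 / (L - 1) so the lone positive is not swamped.
    """
    L = len(probs)
    pos_term = -np.log(probs[positive_idx])
    neg_mask = np.arange(L) != positive_idx
    neg_term = -np.log(1.0 - probs[neg_mask]).sum() / (L - 1)
    return pos_term + neg_term

# A confident, correct prediction should cost less than a confident miss.
good = np.full(NUM_SECTORS, 0.01); good[4] = 0.99
bad = np.full(NUM_SECTORS, 0.01); bad[4] = 0.01; bad[7] = 0.99
print(wan_loss(good, 4) < wan_loss(bad, 4))  # True
```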
 
+ ## Intended uses & limitations

+ ### Intended uses:
  - **Website categorization**: Classifies websites into multiple sectors for general exploration or broader categorization tasks.
  - **Research**: Suitable for research on multi-sector classification or multi-label classification tasks where label dependencies are important.
  - **Content Management**: Can be used in platforms where it's important to categorize content across multiple industries or sectors.

+ ### Limitations:
+ - **Single Positive Label**: Only primary sector labels are observable during training, which might limit performance when predicting secondary sectors.
  - **Flexible mode**: Focuses on recall, which may lead to over-predicting some sectors in websites with ambiguous content.
+ - **Dataset imbalance**: Some sectors are underrepresented, which may affect performance in predicting those categories.

+ ## How to use

+ To use this model with Hugging Face's transformers library:
+ ```python
+ from transformers import pipeline
+
+ classifier = pipeline("text-classification", model="Shahriar/WebSector-Flexible")
+ result = classifier("Your website content or URL here")
+ print(result)
+ ```

+ This will return the predicted sectors of the website based on its content.
+ ## Dataset

+ The model was trained on the **WebSector Corpus**, which consists of 254,702 websites categorized into 10 broad sectors and is split as follows:
+ - **Training set**: 109,476 websites
+ - **Validation set**: 27,370 websites
+ - **Test set**: 58,649 websites

+ The 10 sectors used for classification are:
+ - Finance, Marketing & HR
+ - Information Technology & Electronics
+ - Consumer & Supply Chain
+ - Civil, Mechanical & Electrical
+ - Medical
+ - Sports, Media & Entertainment
+ - Education
+ - Government, Defense & Legal
+ - Travel, Food & Hospitality
+ - Non-Profit
+ ## Training Procedure

  ### Hyperparameters:
+ - **Number of epochs**: 7
+ - **Batch size**: 8
+ - **Learning rate**: $5 \times 10^{-6}$
+ - **Weight decay**: 0.1
+ - **LoRA rank**: 128
+ - **LoRA alpha**: 512
+ - **Dropout rate**: 0.1
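Assuming fine-tuning was done with the `peft` and `transformers` libraries (the card does not name the exact training script), the hyperparameters above would correspond roughly to the following configuration sketch; `output_dir` and the omitted `target_modules` are placeholders, not values from the released code.

```python
from peft import LoraConfig, TaskType
from transformers import TrainingArguments

# LoRA adapter settings matching the card's hyperparameter list
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification head
    r=128,                       # LoRA rank
    lora_alpha=512,              # LoRA alpha
    lora_dropout=0.1,            # dropout rate
)

# Optimizer and schedule settings matching the card
training_args = TrainingArguments(
    output_dir="websector-flexible",  # placeholder path
    num_train_epochs=7,
    per_device_train_batch_size=8,
    learning_rate=5e-6,
    weight_decay=0.1,
)
```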
  ### Training Setup:
+ - **Hardware**: Four GPUs, two NVIDIA RTX A5000 and two NVIDIA TITAN RTX, were used for distributed training.
+ - **Software**: The model was trained with the PyTorch framework and the Hugging Face Transformers library.
+ - **Strategy**: Distributed training was employed, and models were selected based on the lowest validation loss.

+ ## Evaluation

+ The model was evaluated on the **WebSector Corpus** using metrics appropriate for multi-label classification:

+ - **Top-1 Recall**: 68%
+ - **Top-3 Recall**: 85%
+ - **Recall**: 86%
+ - **Precision**: 68%

+ These metrics show that the flexible mode maximizes recall, allowing it to capture multiple relevant sectors while maintaining a solid precision score.
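Top-k recall as reported above can be computed in a few lines of NumPy; `top_k_recall` below is an illustrative helper (not part of the released evaluation code) that checks whether each sample's true primary sector appears among its k highest-scoring predictions.

```python
import numpy as np

def top_k_recall(scores, true_labels, k):
    """Fraction of samples whose true primary sector is among the
    k highest-scoring sectors (illustrative helper)."""
    top_k = np.argsort(scores, axis=1)[:, -k:]  # indices of the k best scores per row
    hits = [label in row for label, row in zip(true_labels, top_k)]
    return float(np.mean(hits))

# Two toy websites scored over three sectors
scores = np.array([[0.1, 0.8, 0.1],
                   [0.7, 0.2, 0.1]])
labels = [1, 2]  # true primary sector per website
print(top_k_recall(scores, labels, k=1))  # 0.5 (second site missed at top-1)
print(top_k_recall(scores, labels, k=3))  # 1.0
```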
 
  ## Ethical Considerations

+ - **Privacy Enforcement**: The model can assist in classifying websites into sectors relevant to privacy regulations like CCPA or HIPAA.
  - **Bias**: As the model was trained on self-declared sector labels, there is potential for bias due to inaccurate or incomplete labeling.

  ## Citation

  If you use this model in your research, please cite the following paper:

+ ```bibtex
+ @article{?,
+   title={WebSector: A New Insight into Multi-Sector Website Classification Using Single Positive Labels},
+   author={Shayesteh, Shahriar and Srinath, Mukund and Matheson, Lee and Schaub, Florian and Giles, C. Lee and Wilson, Shomir},
+   journal={?},
+   year={?},
+ }
  ```