qwdf8591 committed on
Commit
c5ec169
1 Parent(s): 95d0996

Update README.md

Files changed (1)
  1. README.md +7 -17
README.md CHANGED
@@ -48,17 +48,13 @@ print(result)

## Model description

This model, based on the RoBERTa architecture (roberta-base), is fine-tuned for sentiment classification in the finance sector. It classifies auditor reports into three sentiment categories: "negative", "neutral", and "positive". This makes it useful for financial analysis, investment decision-making, and trend analysis of financial reports.
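As a quick illustration, predictions can be obtained through the `transformers` pipeline API. The model identifier below is a placeholder, since the final repository name is not stated in this card.

```python
from transformers import pipeline

# Placeholder model id; substitute the actual repository name for this model.
classifier = pipeline("text-classification", model="your-org/auditor-sentiment-roberta")

report = "The company's internal controls over financial reporting are effective."
print(classifier(report))
# e.g. [{'label': 'positive', 'score': 0.97}]
```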

## Intended uses & limitations

### Intended Uses

This model is intended for professionals and researchers in the finance industry who need an automated tool to assess the sentiment conveyed in textual data, specifically auditor reports. It can be integrated into financial analysis systems to provide quick insight into sentiment trends, which can aid decision-making.

### Limitations

@@ -70,29 +66,23 @@ decision-making processes.

### Training Data

The model was trained on FinanceInc/auditor_sentiment, a proprietary dataset hosted on the Hugging Face Hub that consists of labeled examples of auditor reports. Each report is annotated with one of three sentiment labels: negative, neutral, or positive.
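Assuming access to the dataset repository, it can be loaded as usual with the `datasets` library; the field names shown in the comment are illustrative, not confirmed by this card.

```python
from datasets import load_dataset

# May require authentication if the repository is gated or private.
dataset = load_dataset("FinanceInc/auditor_sentiment")

print(dataset["train"][0])
# e.g. {'sentence': '...', 'label': 2}  -- field names are illustrative
```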

### Evaluation Data

The evaluation was conducted using a split of the same dataset. The data was divided into training and validation sets with a sharding method to ensure a diverse representation of samples in each set.
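One way to realize such a sharding split with the `datasets` library is sketched below; the 80/20 ratio is an assumption, as the exact proportions are not stated in this card.

```python
from datasets import load_dataset

full = load_dataset("FinanceInc/auditor_sentiment", split="train")

# Round-robin sharding (contiguous=False) spreads examples evenly across sets,
# which keeps each set diverse. The 5-shard (80/20) split is an assumption.
validation = full.shard(num_shards=5, index=0, contiguous=False)
train = full.filter(lambda _, i: i % 5 != 0, with_indices=True)
```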

## Training Procedure

The model was fine-tuned for 5 epochs with a batch size of 8 for both training and evaluation. An initial learning rate of 5e-5 was used with 500 warm-up steps to stabilize the early stages of training. The best model was selected based on its performance on the validation set, and only the two best-performing checkpoints were kept to conserve disk space.
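These hyperparameters map naturally onto `transformers.TrainingArguments`. The sketch below reflects the values stated above; the output directory and the metric used for model selection are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./auditor-sentiment-roberta",  # assumption: directory name not stated
    num_train_epochs=5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-5,
    warmup_steps=500,
    eval_strategy="epoch",           # metrics are computed after each epoch
    save_strategy="epoch",
    load_best_model_at_end=True,     # pick the best validation checkpoint
    metric_for_best_model="f1",      # assumption: macro F1 drives selection
    save_total_limit=2,              # keep at most two checkpoints on disk
)
```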

## Evaluation Metrics

Evaluation metrics included accuracy, macro precision, macro recall, and macro F1-score, computed after each epoch. These metrics were used to monitor the model's performance and to check that it generalized beyond the training data.
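A minimal `compute_metrics` function for the `Trainer` that produces the metrics listed above, sketched with scikit-learn; it is illustrative rather than the exact code used.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Compute accuracy and macro-averaged precision/recall/F1 after each epoch."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```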

## Model Performance

The final model's performance on the test set will be reported in terms of accuracy, precision, recall, and F1-score to provide a comprehensive overview of its predictive capabilities.

## Model Status

 