metrics:
- accuracy: 0.91789
---

# Fine-Tuned RoBERTa Model for Sentiment Analysis

## Overview

This is a fine-tuned [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta) model for sentiment analysis, trained on the [SST-2 dataset](https://huggingface.co/datasets/stanfordnlp/sst2). It classifies text into two sentiment categories:

- **0**: Negative
- **1**: Positive

The model achieves an accuracy of **91.789%** on the SST-2 test set, making it a robust choice for sentiment classification tasks.

---

## Model Details

- **Model architecture**: RoBERTa
- **Dataset**: `stanfordnlp/sst2`
- **Language**: English
- **Model size**: 125 million parameters
- **Precision**: FP32
- **File format**: [Safetensors](https://github.com/huggingface/safetensors)
- **Tags**: Text Classification, Transformers, Safetensors, SST-2, English, RoBERTa, Inference Endpoints

---

## Usage

### Installation

Ensure you have the necessary libraries installed:

```bash
pip install transformers torch safetensors
```

### Loading the Model

The model can be loaded with Hugging Face's `transformers` library as follows:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and model
model_name = "syedkhalid076/RoBERTa-Sentimental-Analysis-Model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()  # disable dropout for deterministic inference

# Example text
text = "This is an amazing product!"

# Tokenize input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Perform inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits
predicted_class = logits.argmax(dim=-1).item()

# Map the prediction to sentiment
sentiments = {0: "Negative", 1: "Positive"}
print(f"Sentiment: {sentiments[predicted_class]}")
```
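
For quick experiments, the same checkpoint can also be used through the high-level `pipeline` API. A minimal sketch (note: the returned label names come from this checkpoint's config, so they may appear as `LABEL_0`/`LABEL_1` rather than `Negative`/`Positive`):

```python
from transformers import pipeline

# Build a text-classification pipeline around the same checkpoint
classifier = pipeline(
    "text-classification",
    model="syedkhalid076/RoBERTa-Sentimental-Analysis-Model",
)

# Returns one dict per input, each with a label and a confidence score
print(classifier("This is an amazing product!"))
```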

---

## Performance

### Dataset

The model was trained and evaluated on the **SST-2** dataset, which is widely used for sentiment analysis tasks.

### Metrics

| Metric   | Value   |
|----------|---------|
| Accuracy | 91.789% |

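To reproduce an accuracy figure of this kind, an evaluation loop along these lines should work (a sketch assuming the `datasets` and `evaluate` libraries are installed; the validation split is scored here, as SST-2's test labels are not publicly released):

```python
import torch
from datasets import load_dataset
from evaluate import load as load_metric
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "syedkhalid076/RoBERTa-Sentimental-Analysis-Model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

dataset = load_dataset("stanfordnlp/sst2", split="validation")
metric = load_metric("accuracy")

# Score the split in small batches and accumulate accuracy
for batch in dataset.iter(batch_size=32):
    inputs = tokenizer(batch["sentence"], return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    metric.add_batch(predictions=logits.argmax(dim=-1), references=batch["label"])

print(metric.compute())  # {'accuracy': ...}
```
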
---

## Deployment

The model is hosted on Hugging Face and can be used directly via their [Inference Endpoints](https://huggingface.co/inference-endpoints).

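For example, the checkpoint can be queried remotely with the `huggingface_hub` client. A sketch, assuming the model is reachable through Hugging Face's hosted inference infrastructure (an API token may be required depending on how it is deployed):

```python
from huggingface_hub import InferenceClient

# Point the client at the hosted checkpoint; pass token="hf_..." if required
client = InferenceClient(model="syedkhalid076/RoBERTa-Sentimental-Analysis-Model")

# Returns a list of labels with confidence scores
print(client.text_classification("This is an amazing product!"))
```
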
---

## Applications

This model can be used in a variety of applications, such as:

- Customer feedback analysis
- Social media sentiment monitoring
- Product review classification
- Opinion mining for research purposes

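Most of these applications involve scoring many texts at once. A minimal batched-inference sketch (the example reviews are hypothetical):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "syedkhalid076/RoBERTa-Sentimental-Analysis-Model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Hypothetical product reviews to classify in one batch
reviews = [
    "Great value for the price.",
    "The packaging was damaged and support never replied.",
]

inputs = tokenizer(reviews, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

sentiments = {0: "Negative", 1: "Positive"}
for review, pred in zip(reviews, logits.argmax(dim=-1).tolist()):
    print(f"{sentiments[pred]}: {review}")
```
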
---

## Limitations

While the model performs well on the SST-2 dataset, consider these limitations:

1. It may not generalize well to domains whose language or sentiment nuances differ from the training data.
2. It supports only binary sentiment classification (positive/negative).

For fine-tuning on custom datasets or additional labels, refer to the [Hugging Face documentation](https://huggingface.co/docs/transformers/training).

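As a starting point, further fine-tuning with the `Trainer` API might look like the sketch below (the CSV file names, column names, and hyperparameters are illustrative assumptions, not part of this repository):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "syedkhalid076/RoBERTa-Sentimental-Analysis-Model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Hypothetical CSV files with "text" and "label" columns
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "val.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="roberta-sentiment-finetuned",
    num_train_epochs=3,              # illustrative hyperparameters
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```
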
---

## Model Card

| **Feature**     | **Details**      |
|-----------------|------------------|
| **Language**    | English          |
| **Model size**  | 125M parameters  |
| **File format** | Safetensors      |
| **Precision**   | FP32             |
| **Dataset**     | stanfordnlp/sst2 |
| **Accuracy**    | 91.789%          |

---

## Contributing

Contributions to improve the model or extend its capabilities are welcome. Fork this repository, make your changes, and submit a pull request.

---

## Acknowledgments

- The [Hugging Face Transformers library](https://github.com/huggingface/transformers) for model implementation and fine-tuning utilities.
- The [Stanford Sentiment Treebank 2 (SST-2)](https://huggingface.co/datasets/stanfordnlp/sst2) dataset for providing high-quality sentiment analysis data.

---

**Author**: Syed Khalid Hussain