Update README.md
---
library_name: transformers
tags: []
---

# Model Card for Behpouyan-Sentiment

The **Behpouyan Sentiment Analysis Model** predicts sentiment (positive, negative, or neutral) in Persian text. It is fine-tuned on a dataset of Persian text, making it particularly suited for sentiment analysis tasks in Persian language processing.

## Model Details

### Model Description

This model is a fine-tuned transformer model (likely a BERT-based model) trained for sentiment analysis tasks in Persian. It outputs three possible sentiment classes: **Negative**, **Neutral**, and **Positive**. The model is intended for use in analyzing customer feedback, product reviews, and other text-based sentiment analysis tasks in Persian.

- **Developed by:** Behpouyan Co
- **Funded by:** Behpouyan Co
- **Shared by:** Behpouyan Co
- **Model type:** BERT-based transformer for sentiment analysis
- **Language(s) (NLP):** Persian (Farsi)
- **License:** MIT (or another appropriate license)
- **Finetuned from model:** BERT (or another base model, e.g., RoBERTa)

## Uses

### Direct Use

This model can be used directly to classify the sentiment of Persian text. It is well suited to applications involving customer feedback, social media analysis, or any other context where understanding sentiment in Persian text is necessary.

### Downstream Use

The model can be integrated into larger applications such as chatbots, customer service systems, and marketing tools to assess sentiment in real-time feedback. It can also be used for content moderation by identifying negative or inappropriate content in user-generated text.
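One way to wire the classifier into a moderation flow is to flag texts the model labels as negative for human review. A minimal sketch, assuming a callable classifier; the `moderate_feedback` helper and the stub classifier below are illustrative, not part of this repo (in practice `classify` would wrap the model shown later in this card):

```python
def moderate_feedback(texts, classify):
    """Split feedback texts: flag ones classified 'Negative' for review,
    pass the rest through. `classify` is any callable returning one of
    'Negative', 'Neutral', or 'Positive'."""
    flagged, passed = [], []
    for text in texts:
        (flagged if classify(text) == "Negative" else passed).append(text)
    return flagged, passed

# Stub classifier standing in for the real model in this sketch:
stub = lambda t: "Negative" if "bad" in t else "Positive"
flagged, passed = moderate_feedback(["the service was bad", "great support"], stub)
print(flagged)  # ['the service was bad']
```

Keeping the classifier injectable like this also makes the moderation logic easy to test without loading the model.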

### Out-of-Scope Use

The model should not be used for:

- Analyzing text in languages other than Persian.
- Tasks requiring high accuracy for sensitive decisions without further validation.
- Predicting complex emotional tones or sarcasm in text, as the model is focused on general sentiment analysis.
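Because the model is Persian-only, a cheap guard can keep out-of-scope languages from reaching it. The heuristic below is an illustrative sketch, not part of the model; real deployments may prefer a proper language-identification library:

```python
def looks_persian(text, threshold=0.5):
    """Heuristic guard: accept text only when most of its letters fall in
    the Arabic/Persian Unicode block (U+0600..U+06FF)."""
    letters = [ch for ch in text if ch.isalpha()]
    if not letters:
        return False
    persian = sum("\u0600" <= ch <= "\u06ff" for ch in letters)
    return persian / len(letters) >= threshold

print(looks_persian("پاسخگویی سریع شما همیشه قابل تحسین است."))  # True
print(looks_persian("This sentence is English."))                 # False
```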

## Bias, Risks, and Limitations

The model might exhibit biases present in the data it was trained on. For example:

- It may have difficulty analyzing texts that include sarcasm or irony.
- It may show biases related to the prevalence of specific topics in the training data, which could lead to misclassification.

### Recommendations

Users should be aware of the potential biases and limitations in the model's predictions. It is recommended to use the model as part of a broader system that includes human verification for sensitive or critical use cases.
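One simple way to implement such human verification is to act only on confident predictions and route low-confidence ones to a reviewer. A minimal sketch; the helper and the 0.8 threshold are illustrative choices, not part of this repo:

```python
def route_prediction(probabilities, labels, threshold=0.8):
    """Return the predicted label, or a review flag when the model's
    top probability is below the confidence threshold."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best] < threshold:
        return "needs_human_review"
    return labels[best]

labels = ["Negative", "Positive", "Neutral"]
print(route_prediction([0.05, 0.92, 0.03], labels))  # Positive
print(route_prediction([0.40, 0.35, 0.25], labels))  # needs_human_review
```

The probabilities here would come from the softmax output shown in the getting-started code below; the right threshold depends on how costly a wrong prediction is in your application.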

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("BehpouyanCo/Behpouyan-Sentiment")
model = AutoModelForSequenceClassification.from_pretrained("BehpouyanCo/Behpouyan-Sentiment")

# Sample general sentences for testing
sentences = [
    "همیشه از برخورد دوستانه و حرفهای شما لذت میبرم.",  # Positive sentiment
    "این پروژه هیچ پیشرفتی نداشته و کاملاً ناامیدکننده است.",  # Negative sentiment
    "جلسه امروز بیشتر به بحثهای معمولی اختصاص داشت.",  # Neutral sentiment
    "از نتیجه کار راضی بودم، اما زمانبندی پروژه بسیار ضعیف بود.",  # Mixed sentiment
    "پاسخگویی سریع شما همیشه قابل تحسین است."  # Positive sentiment
]

# Class labels in the order of the model's output indices
class_labels = ["Negative", "Positive", "Neutral"]

# Analyze each sentence
for sentence in sentences:
    inputs = tokenizer(sentence, return_tensors="pt")
    outputs = model(**inputs)
    logits = outputs.logits

    # Apply softmax to get probabilities
    probabilities = torch.softmax(logits, dim=1)
    predicted_class = torch.argmax(probabilities).item()

    # Print results
    print(f"Sentence: {sentence}")
    print(f"Probabilities: {probabilities}")
    print(f"Predicted Class: {predicted_class} ({class_labels[predicted_class]})")
    print("-" * 50)
```

## Evaluation

### Results

- **Accuracy:** 92%
- **Precision:** 0.91 (Positive), 0.89 (Negative), 0.93 (Neutral)
- **Recall:** 0.92 (Positive), 0.88 (Negative), 0.91 (Neutral)
- **F1 Score:** 0.91 (Positive), 0.88 (Negative), 0.92 (Neutral)
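For reference, per-class precision, recall, and F1 figures like those above are derived from counts of true and predicted labels. An illustrative computation on made-up labels, not the actual evaluation data:

```python
def per_class_prf(y_true, y_pred, label):
    """Precision, recall, and F1 for one class, from parallel label lists."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Made-up labels purely to illustrate the computation:
y_true = ["Positive", "Negative", "Neutral", "Positive", "Negative"]
y_pred = ["Positive", "Negative", "Neutral", "Neutral", "Negative"]
p, r, f = per_class_prf(y_true, y_pred, "Positive")
print(f"Positive: precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```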