abullard1 committed
Commit f48e248
1 Parent(s): 27be02e

Update README.md

Files changed (1): README.md (+38 -1)
README.md CHANGED
@@ -40,4 +40,41 @@ model-index:
   - name: F1-score
     type: f1
     value: 0.794
- ---
+ ---
+
+ # Fine-tuned ALBERT Model for Constructiveness Detection in Steam Reviews
+
+ ## Model Summary
+
+ This model is a fine-tuned version of **albert-base-v2**, designed to classify whether Steam game reviews are constructive or non-constructive. The model was trained on the [1.5K Steam Reviews Binary Labeled for Constructiveness dataset](https://huggingface.co/datasets/abullard1/steam-reviews-constructiveness-binary-label-annotations-1.5k), which consists of user-generated game reviews (along with other features) labeled with binary labels (`1` for constructive, `0` for non-constructive).
+ The dataset's features were concatenated into strings of the following format: "Review: **{review}**, Playtime: **{author_playtime_at_review}**, Voted Up: **{voted_up}**, Upvotes: **{votes_up}**, Votes Funny: **{votes_funny}**" and then fed to the model together with the respective ***constructive*** labels. Concatenating the features into a single string like this offers a good trade-off between complexity and performance compared to other options.
+
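+ For illustration, here is a minimal sketch of how such an input string can be assembled from the raw dataset fields. The `build_review_input` helper and the example row are hypothetical (not part of the dataset or model); only the string format itself comes from the training setup described above.
+
+ ```python
+ def build_review_input(review, playtime, voted_up, votes_up, votes_funny):
+     """Concatenate the raw review features into the single-string format used during training."""
+     return (f"Review: {review}, Playtime: {playtime}, Voted Up: {voted_up}, "
+             f"Upvotes: {votes_up}, Votes Funny: {votes_funny}")
+
+ # Hypothetical example row
+ text = build_review_input("Great gunplay, but the servers are unstable.", 12, True, 3, 0)
+ print(text)
+ # Review: Great gunplay, but the servers are unstable., Playtime: 12, Voted Up: True, Upvotes: 3, Votes Funny: 0
+ ```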
+ ### Intended Use
+
+ The model can be applied in any scenario where it is important to distinguish between helpful and unhelpful textual feedback, particularly in the context of gaming communities or online reviews. Potential use cases include platforms like **Steam** and **Discord**, or any other community-driven feedback system where understanding the quality of feedback is critical.
+
+ ### Limitations
+
+ The model may be less effective in domains outside of gaming, as it was trained specifically on Steam reviews. Additionally, the training dataset was slightly **imbalanced** (approximately 63% non-constructive, 37% constructive), which may bias predictions toward the non-constructive class.
+
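+ If you fine-tune on this data yourself, one common way to counteract such an imbalance is to weight the loss by inverse class frequency. The sketch below shows the idea; the 63/37 split comes from the dataset description above, while everything else (variable names, the weighting scheme) is an assumption for illustration, not the authors' training setup.
+
+ ```python
+ import torch
+
+ # Approximate class distribution from the dataset description:
+ # ~63% non-constructive (0), ~37% constructive (1)
+ class_frequencies = torch.tensor([0.63, 0.37])
+
+ # Inverse-frequency weights, normalized so they average to 1
+ weights = 1.0 / class_frequencies
+ weights = weights / weights.mean()
+
+ # Could be passed to the loss during fine-tuning, e.g.:
+ loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
+ print(weights)  # roughly tensor([0.74, 1.26]) -> higher weight on the minority class
+ ```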
+ ## Evaluation Results
+
+ The model was trained and evaluated using an 80/10/10 train/dev/test split, achieving the following performance metrics on the held-out test set:
+
+ - **Accuracy**: 0.80
+ - **Precision**: 0.80
+ - **Recall**: 0.82
+ - **F1-score**: 0.79
+
+ These results indicate that the model identifies the correct label roughly 80% of the time.
+
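+ As a rough illustration of how such metrics can be computed, the sketch below runs the classifier over a held-out test split and scores the predictions with scikit-learn. The contents of `test_set` and the `LABEL_0`/`LABEL_1` parsing are assumptions made for the sake of the example, not the authors' evaluation code.
+
+ ```python
+ from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
+ from transformers import pipeline
+
+ classifier = pipeline("text-classification",
+                       model="abullard1/roberta-steam-review-constructiveness-classifier")
+
+ # Hypothetical held-out examples in the model's input format: (text, label)
+ test_set = [
+     ("Review: Crashes on startup, devs ignore bug reports., Playtime: 30, Voted Up: False, Upvotes: 5, Votes Funny: 0", 1),
+     ("Review: lol, Playtime: 1, Voted Up: False, Upvotes: 0, Votes Funny: 2", 0),
+ ]
+
+ texts, labels = zip(*test_set)
+ # Assumes labels of the form "LABEL_0" / "LABEL_1"; adjust if the model's id2label differs
+ preds = [int(out["label"].split("_")[-1]) for out in classifier(list(texts))]
+
+ print("Accuracy :", accuracy_score(labels, preds))
+ print("Precision:", precision_score(labels, preds))
+ print("Recall   :", recall_score(labels, preds))
+ print("F1-score :", f1_score(labels, preds))
+ ```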
+ ## How to Use
+
+ You can use this model with the Hugging Face `pipeline` API for easy classification. Here's how to do it in Python:
+
+ ```python
+ from transformers import pipeline
+
+ # Load the fine-tuned classifier from the Hugging Face Hub
+ classifier = pipeline("text-classification", model="abullard1/roberta-steam-review-constructiveness-classifier")
+
+ # Input follows the training format: "Review: ..., Playtime: ..., Voted Up: ..., Upvotes: ..., Votes Funny: ..."
+ result = classifier("Review: Bad. Really bad. Kinda., Playtime: 4, Voted Up: False, Upvotes: 2, Votes Funny: 0")
+ print(result)
+ ```
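+
+ The pipeline returns one dictionary per input, of the form `[{'label': ..., 'score': ...}]`, where `score` is the model's confidence in the predicted class. The exact label strings depend on the model's `id2label` configuration.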