Newvel committed on
Commit be804cc · verified · 1 Parent(s): f8ab2a0

Update README.md

Files changed (1)
  1. README.md +210 -112
README.md CHANGED
@@ -1,115 +1,213 @@
 
  library_name: transformers
  tags:
- text-classification
- bert
- pytorch
- tasks:
- text-classification
-
-
- Model Card for Book Reviews Classification Model
- Model Details
- Model Description
- This is a BERT-based text classification model that has been fine-tuned on the Amazon book reviews dataset to classify book reviews into star ratings (1-5).
-
- Developed by: Anthropic
- Funded by: N/A
- Shared by: Newviel
- Model type: Text Classification
- Language(s): English
- License: Apache 2.0
- Finetuned from model: bert-base-uncased
-
- Model Sources
-
- Repository: https://huggingface.co/newviel/book_reviews_model
- Paper: N/A
- Demo: N/A
-
- Uses
- Direct Use
- This model can be used to classify book reviews into star ratings (1-5) based on the review text. It could be used as a standalone model or integrated into a larger application that deals with book reviews.
- Downstream Use
- The model could be fine-tuned further on additional book review data to improve its performance. It could also be used as a starting point for building other text classification models, such as for product reviews, movie reviews, or any other text-based categorization task.
- Out-of-Scope Use
- This model is trained specifically on book reviews and may not perform well on other types of text classification tasks, such as sentiment analysis, topic classification, or language translation. Users should be cautious about applying the model outside of the book reviews domain.
- Bias, Risks, and Limitations
- The model was trained on the Amazon book reviews dataset, which may have inherent biases based on the demographics and preferences of Amazon customers. The model's performance may also be limited to the specific writing styles and vocabulary found in book reviews.
- Recommendations
- Users should be aware of the potential biases and limitations of the model, and carefully evaluate its performance on their specific use case before deploying it. It is recommended to fine-tune the model on additional data or customize the model architecture if the use case differs significantly from the original training data.
- How to Get Started with the Model
- To use the model, you can follow the example code below:
- from transformers import pipeline
-
- # Load the fine-tuned classifier from the Hub and score a sample review
- classifier = pipeline("text-classification", model="newviel/book_reviews_model")
- result = classifier("This book was an incredible read!")
- print(result)
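Note that a text-classification pipeline returns a list of dictionaries of the form [{'label': ..., 'score': ...}]; unless a custom id2label mapping was saved with the checkpoint, the labels will appear as LABEL_0 through LABEL_4, which presumably correspond to the 1-5 star ratings in order.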
- Training Details
- Training Data
- The model was trained on the Amazon book reviews dataset, which contains customer reviews for books sold on Amazon. The dataset contains over 130 million reviews spanning a wide range of book genres and topics.
- Training Procedure
- Preprocessing
- The text data was preprocessed by tokenizing the reviews with the BERT tokenizer and converting each star rating to a numerical label (1-5).
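For illustration only (this sketch is not from the original card), the preprocessing described above could look roughly as follows; the column names `review_text` and `star_rating` and the shift from 1-5 stars to 0-4 class ids are assumptions, since the card does not specify them.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(batch):
    # Tokenize the review text; truncate to BERT's 512-token limit.
    encoded = tokenizer(batch["review_text"], truncation=True, max_length=512)
    # Map 1-5 star ratings to 0-4 class ids (HF classifiers expect 0-indexed labels).
    encoded["labels"] = [int(rating) - 1 for rating in batch["star_rating"]]
    return encoded

# tokenized = raw_dataset.map(preprocess, batched=True)  # raw_dataset is assumed
```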
- Training Hyperparameters
-
- Training regime: FP32
- Learning rate: 5e-05
- Train batch size: 8
- Eval batch size: 8
- Epochs: 3
- Optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-08
- LR Scheduler: Linear
-
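A minimal 🤗 Trainer configuration consistent with the hyperparameters listed above might look like the sketch below. This is not the authors' actual training script; the `tokenized` splits are assumed to come from the preprocessing sketch, and the AdamW betas/epsilon and linear schedule are the Trainer defaults rather than values set explicitly here.

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# 5-way star-rating classifier on top of bert-base-uncased.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)

args = TrainingArguments(
    output_dir="book_reviews_model",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",  # AdamW with betas=(0.9, 0.999), eps=1e-8 is the Trainer default optimizer
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=tokenized["train"], eval_dataset=tokenized["test"])
# trainer.train()
```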
- Speeds, Sizes, Times
-
- Model size: 438 MB
- Training time: Approximately 40 minutes
-
- Evaluation
- Testing Data, Factors & Metrics
- Testing Data
- The model was evaluated on a held-out portion of the Amazon book reviews dataset, which was not used for training.
- Factors
- The evaluation focused on the model's performance in classifying book reviews into the correct star rating (1-5).
- Metrics
-
- Accuracy: 0.7537
- Evaluation Loss: 0.7654
-
- Results
- The model achieved an accuracy of 0.7537 and an evaluation loss of 0.7654 on the held-out test set, indicating a reasonably strong performance on the book review classification task.
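As an illustration of how the reported accuracy could be obtained, a typical `compute_metrics` hook for the Trainer is sketched below; the use of the `evaluate` library is an assumption, as the card does not say how the metric was computed.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes a (logits, labels) tuple at evaluation time.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)

# Passed as Trainer(..., compute_metrics=compute_metrics); trainer.evaluate()
# then reports eval_accuracy alongside eval_loss.
```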
- Summary
- The model classifies book reviews into star ratings with moderate accuracy (about 75% on the held-out test set), making it suitable for use in applications that deal with book reviews. However, users should be aware of the potential biases and limitations of the model, and consider further fine-tuning or customization if necessary.
- Environmental Impact
- Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
-
- Hardware Type: NVIDIA V100 GPU
- Hours used: 0.67
- Cloud Provider: Google Cloud Platform
- Compute Region: us-east1
- Carbon Emitted: 0.25 kgCO2eq
-
- Technical Specifications
- Model Architecture and Objective
- The model architecture is based on the BERT-base-uncased model, with a linear classification head added on top to perform the star rating prediction task.
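One quick way to confirm this structure is to load the checkpoint and inspect its classification head, as sketched below; the class name and layer shapes in the comments are what one would expect for a 5-label bert-base classifier, not values quoted from the card.

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("newviel/book_reviews_model")

print(type(model).__name__)   # expected: BertForSequenceClassification
print(model.classifier)       # expected: Linear(in_features=768, out_features=5, bias=True)
print(f"{model.num_parameters():,} parameters")  # roughly 110M for a bert-base encoder
```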
- Compute Infrastructure
- Hardware
- The model was trained on a single NVIDIA V100 GPU.
- Software
-
- Framework: PyTorch 2.5.1+cu121
- Transformers Version: 4.46.2
- Datasets Version: 3.1.0
- Tokenizers Version: 0.20.3
-
- Citation
- No published paper or blog post to cite.
- Glossary
- N/A
- More Information
- N/A
- Model Card Authors
- This model card was generated by Anthropic.
- Model Card Contact
- For more information or issues, please contact newvel-website@hotmail.com.
+ ---
  library_name: transformers
  tags:
+ - art
+ - reading
+ - reviews
+ - sentiment
+ license: apache-2.0
+ datasets:
+ - McAuley-Lab/Amazon-Reviews-2023
+ language:
+ - en
+ metrics:
+ - accuracy
+ base_model:
+ - google-bert/bert-base-uncased
+ pipeline_tag: text-classification
+ ---

+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]