---
pipeline_tag: text-classification
metrics:
- accuracy
license: mit
datasets:
- mteb/twentynewsgroups-clustering
language:
- en
library_name: transformers
---
# BERT Text Classification Model

This is a simple model for text classification using BERT.

## Usage

To use the model, call the `classify_text` function defined below with a text input; it returns the index of the predicted class.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load the pre-trained BERT tokenizer and classification model
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
model.eval()

# Define a function to classify text
def classify_text(text):
    # Tokenize the input and run a forward pass without tracking gradients
    inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits
    # Convert logits to probabilities and pick the highest-scoring class index
    probabilities = logits.softmax(dim=1)
    predicted_class = probabilities.argmax(dim=1).item()
    return predicted_class

# Example usage
text = "This is a positive review."
predicted_class = classify_text(text)
print("Predicted class:", predicted_class)
```
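
The predicted class is returned as an integer index. If the checkpoint's config defines label names, you can map the index back to a human-readable label via `model.config.id2label`. The sketch below assumes such a mapping exists; a plain `bert-base-uncased` checkpoint only ships with generic `LABEL_0`/`LABEL_1` entries.

```python
# Map the predicted index back to a label name, if the config defines one.
# Note: the label names available here depend on the checkpoint; this is an
# illustrative sketch, not a mapping shipped with this repository.
label = model.config.id2label.get(predicted_class, str(predicted_class))
print("Predicted label:", label)
```

Also note that loading `bert-base-uncased` directly into `BertForSequenceClassification` initializes the classification head with random weights, so the predictions above are only meaningful after fine-tuning the model or loading an already fine-tuned checkpoint.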