---
license: mit
---
## BERT-based Text Classification Model
This model is a fine-tuned version of bert-base-uncased, adapted for text classification across a diverse set of categories. It was trained on a dataset collected from multiple sources, including the News Category Dataset on Kaggle and various other websites.
The model classifies text into the following 12 categories; as the example below shows, more than one category can apply to a single input:
* Food
* Videogames & Shows
* Kids and fun
* Homestyle
* Travel
* Health
* Charity
* Electronics & Technology
* Sports
* Cultural & Music
* Education
* Convenience
The model achieves an accuracy of 0.721459, an F1 score of 0.659451, a precision of 0.707620, and a recall of 0.635155.
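The evaluation script is not published here; the following is a minimal sketch of how such metrics are commonly computed with scikit-learn, assuming macro averaging for F1, precision, and recall (the averaging scheme and the placeholder labels are assumptions, not the author's actual setup):
```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Placeholder gold labels and predictions (label ids 0-11 for the 12 categories)
y_true = [0, 3, 7, 1, 5, 9]
y_pred = [0, 3, 5, 1, 5, 2]

print("accuracy :", accuracy_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred, average="macro"))
print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))
```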
## Model Architecture
The model uses the BertForSequenceClassification architecture and was fine-tuned on the dataset described above with the following key configuration parameters:
* Hidden size: 768
* Number of attention heads: 12
* Number of hidden layers: 12
* Max position embeddings: 512
* Type vocab size: 2
* Vocab size: 30522
The model uses the GELU activation function in its hidden layers and applies dropout with a probability of 0.1 to the attention probabilities to prevent overfitting.
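These configuration values, along with the mapping from label index to category name, can be inspected directly from the published checkpoint:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("PavanDeepak/Topic_Classification")

print(config.hidden_size)                   # 768
print(config.num_attention_heads)           # 12
print(config.num_hidden_layers)             # 12
print(config.max_position_embeddings)       # 512
print(config.type_vocab_size)               # 2
print(config.vocab_size)                    # 30522
print(config.hidden_act)                    # "gelu"
print(config.attention_probs_dropout_prob)  # 0.1
print(config.id2label)                      # label index -> category name
```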
## Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from scipy.special import expit

MODEL = "PavanDeepak/Topic_Classification"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

# Mapping from label index to category name, stored in the model config
class_mapping = model.config.id2label

text = "I love chicken manchuria"
tokens = tokenizer(text, return_tensors="pt")
output = model(**tokens)

# Logits have shape (batch_size, num_labels); take the row for the single input
scores = output.logits[0].detach().numpy()

# Apply a sigmoid so each category gets an independent probability (multi-label)
scores = expit(scores)

# Report every category whose probability crosses the 0.5 threshold
predictions = (scores >= 0.5) * 1
for i in range(len(predictions)):
    if predictions[i]:
        print(class_mapping[i])
```
## Output
* Food
* Videogames & Shows
* Homestyle
* Travel
* Health
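For quick experiments, the same multi-label scoring can also be reproduced with the transformers pipeline API. This is a minimal sketch, not the author's published usage: the sigmoid activation mirrors the expit call above, and the defensive unwrapping accounts for differences in the pipeline's return shape across transformers versions.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="PavanDeepak/Topic_Classification",
    function_to_apply="sigmoid",  # independent per-label probabilities, as above
)

# top_k=None requests scores for every label rather than only the best one
results = classifier("I love chicken manchuria", top_k=None)

# Some transformers versions nest the per-label scores one level deeper
if results and isinstance(results[0], list):
    results = results[0]

for r in results:
    if r["score"] >= 0.5:
        print(r["label"])
```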