Milan97 committed
Commit b2b8fa1 · verified · 1 Parent(s): e791ddc

Update README.md

Files changed (1):
  1. README.md +47 -11
README.md CHANGED
@@ -1,26 +1,62 @@
-
  ---
  tags:
  - autotrain
  - text-classification
  base_model: sentence-transformers/all-mpnet-base-v2
  widget:
- - text: "I love AutoTrain"
  ---

- # Model Trained Using AutoTrain

- - Problem type: Text Classification

- ## Validation Metrics
- loss: 0.0028607482090592384

- f1: 1.0

- precision: 1.0

- recall: 1.0

- auc: 1.0

- accuracy: 1.0
  ---
  tags:
  - autotrain
  - text-classification
  base_model: sentence-transformers/all-mpnet-base-v2
  widget:
+ - text: I love AutoTrain
+ language:
+ - en
+ pipeline_tag: text-classification
+ ---
+
+ # Clickbait Detection Model
+
+ This is a **custom-trained text classification model** created using Hugging Face **AutoTrain**. The model is designed to classify text into two categories:
+ - **Clickbait**
+ - **Not Clickbait**
+
+ The model is a fine-tuned version of the `sentence-transformers/all-mpnet-base-v2` base model, which is well suited to text classification tasks.
+
  ---

+ ## Model Details
+
+ - **Base Model**: [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
+ - **Problem Type**: Text Classification
+ - **Language**: English (`en`)
+ - **Pipeline Tag**: text-classification
+ - **Tags**: autotrain, text-classification
+
+ ---
+
+ ## Usage
+
+ You can use this model with Hugging Face’s `transformers` library to classify text as `clickbait` or `not clickbait`.

+ ### Example Code
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification

+ # Load tokenizer and model
+ model_name = "Milan97/autotrain-9ikup-ih7yd"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name)

+ # Input text
+ text = "You won’t believe what happened next!"

+ # Tokenize and perform inference
+ inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
+ outputs = model(**inputs)

+ # Get predicted label and confidence
+ logits = outputs.logits
+ predicted_class = logits.argmax(dim=1).item()
+ confidence = logits.softmax(dim=1).max().item()

+ # Label mapping
+ labels = {0: "Not Clickbait", 1: "Clickbait"}

+ print(f"Text: {text}")
+ print(f"Prediction: {labels[predicted_class]} (Confidence: {confidence:.2f})")
+ ```