q3fer committed
Commit f8e1ae3 · 1 Parent(s): 7e24b37

Add pipeline and full example usage

Files changed (1)
  1. README.md +57 -0
README.md CHANGED
@@ -12,6 +12,63 @@ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingf

The model is fine-tuned for text classification of logical fallacies. There are a total of 14 classes: ad hominem, ad populum, appeal to emotion, circular reasoning, equivocation, fallacy of credibility, fallacy of extension, fallacy of logic, fallacy of relevance, false causality, false dilemma, faulty generalization, intentional, and miscellaneous.

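The label set can also be read programmatically from the model configuration; a minimal sketch, assuming only the model id above and a recent `transformers` release:

```
from transformers import AutoConfig

# Loading just the config is enough to inspect the 14 fallacy labels (no model weights needed).
config = AutoConfig.from_pretrained("q3fer/distilbert-base-fallacy-classification")
for idx, label in sorted(config.id2label.items()):
    print(idx, label)
```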
+ ## Example Pipeline
+
+ ```
+ from transformers import pipeline
+
+ text = "We know that the earth is flat because it looks and feels flat."
+ model_path = "q3fer/distilbert-base-fallacy-classification"
+ pipe = pipeline("text-classification", model=model_path, tokenizer=model_path)
+ pipe(text)
+ ```
+
+ ```
+ [{'label': 'circular reasoning', 'score': 0.951125979423523}]
+ ```
+
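If a score for every class is wanted from the pipeline rather than only the top label, recent `transformers` releases accept a `top_k` argument on text-classification pipelines (older releases used `return_all_scores=True`); a minimal sketch under that assumption:

```
from transformers import pipeline

model_path = "q3fer/distilbert-base-fallacy-classification"
pipe = pipeline("text-classification", model=model_path, tokenizer=model_path)

# top_k=None asks the pipeline to return a score for every class instead of only the best one.
print(pipe("We know that the earth is flat because it looks and feels flat.", top_k=None))
```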
+ ## Full Classification Example
+
+ ```
+ import torch
+ from transformers import AutoTokenizer
+ from transformers import AutoModelForSequenceClassification
+
+ model = AutoModelForSequenceClassification.from_pretrained("q3fer/distilbert-base-fallacy-classification")
+ tokenizer = AutoTokenizer.from_pretrained("q3fer/distilbert-base-fallacy-classification")
+
+ text = "We know that the earth is flat because it looks and feels flat."
+ inputs = tokenizer(text, return_tensors='pt')
+
+ # Run the model without tracking gradients, then turn the logits for the
+ # single input sentence into probabilities over the 14 classes.
+ with torch.no_grad():
+     logits = model(**inputs)
+     scores = logits[0][0]
+     scores = torch.nn.Softmax(dim=0)(scores)
+
+ # Rank every class from most to least probable.
+ _, ranking = torch.topk(scores, k=scores.shape[0])
+ ranking = ranking.tolist()
+
+ results = [f"{i+1}) {model.config.id2label[ranking[i]]} {scores[ranking[i]]:.4f}" for i in range(scores.shape[0])]
+ print('\n'.join(results))
+ ```
+
+ ```
+ 1) circular reasoning 0.9511
+ 2) fallacy of logic 0.0154
+ 3) equivocation 0.0080
+ 4) fallacy of credibility 0.0069
+ 5) ad populum 0.0028
+ 6) fallacy of extension 0.0025
+ 7) intentional 0.0024
+ 8) faulty generalization 0.0021
+ 9) appeal to emotion 0.0021
+ 10) fallacy of relevance 0.0019
+ 11) false dilemma 0.0017
+ 12) ad hominem 0.0013
+ 13) false causality 0.0012
+ 14) miscellaneous 0.0004
+ ```
+
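For several statements at once, the tokenizer can pad a whole batch; a minimal sketch, where the second example sentence is made up purely for illustration:

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "q3fer/distilbert-base-fallacy-classification"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

texts = [
    "We know that the earth is flat because it looks and feels flat.",
    "Everyone I know buys this phone, so it must be the best one.",  # hypothetical example input
]

# Pad and truncate so sentences of different lengths fit into one tensor batch.
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Report the highest-scoring class for each sentence.
probs = torch.softmax(logits, dim=-1)
for text, row in zip(texts, probs):
    idx = int(row.argmax())
    print(f"{model.config.id2label[idx]} ({row[idx]:.4f}) <- {text}")
```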
## Training and evaluation data

The [Logical Fallacy Dataset](https://github.com/causalNLP/logical-fallacy) is used for training and evaluation.
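As a rough illustration of pulling that data into Python, assuming the examples ship as CSV files in that repository (the file path below is hypothetical and should be checked against the repository layout):

```
import pandas as pd

# Hypothetical file name; check the causalNLP/logical-fallacy repository for the actual data files.
url = "https://raw.githubusercontent.com/causalNLP/logical-fallacy/main/data/edu_train.csv"
df = pd.read_csv(url)

print(df.columns.tolist())
print(df.head())
```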