---
language: en
license: mit
datasets:
- midas/inspec
tags:
- keyphrase-extraction
metrics:
- f1
---
**Work in progress**
# 🔑 Keyphrase Extraction model: KBIR-inspec
Keyphrase extraction is a technique in text analysis that extracts the most important keyphrases from a document. Since doing this by hand is time-consuming, Artificial Intelligence is used to automate it.
Currently, classical machine learning methods that rely on statistics and linguistics are widely used for the extraction process. Because these methods are so widely adopted in the community, many easy-to-use libraries are available for them.
Recent innovations in deep learning (such as recurrent neural networks, transformers, and GANs) make it possible to improve keyphrase extraction further: these newer methods also take the semantics and context of a document into account, which is quite an improvement.

## 📓 Model Description
This model uses KBIR as its base model and fine-tunes it on the Inspec dataset. Keyphrase Boundary Infilling with Replacement (KBIR) is a pre-trained model that utilizes a multi-task learning setup to optimize a combined loss of Masked Language Modeling (MLM), Keyphrase Boundary Infilling (KBI), and Keyphrase Replacement Classification (KRC).

Paper: https://arxiv.org/abs/2112.08547

## ✋ Intended uses & limitations
### ❓ How to use
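The model casts keyphrase extraction as token classification: every token in the document gets one of three tags, B (first token of a keyphrase), I (inside a keyphrase), or O (outside any keyphrase). A hypothetical illustration of that tagging scheme (the sentence is made up):

```python
# Hypothetical example of the B/I/O tags the token classifier predicts
tokens = ["Deep", "learning", "improves", "keyphrase", "extraction", "."]
labels = ["B",    "I",        "O",        "B",         "I",          "O"]
# -> keyphrases: "Deep learning", "keyphrase extraction"
```

The full snippet below loads the model, tags a document, and decodes the tagged tokens back into keyphrases: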
```python
import numpy as np
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Define post-processing functions
def concat_tokens_by_tag(keywords):
    # Group token ids into keyphrases: a "B" tag starts a new keyphrase,
    # an "I" tag extends the most recent one
    keyphrase_tokens = []
    for token_id, label in keywords:
        if label == "B":
            keyphrase_tokens.append([token_id])
        elif label == "I":
            if len(keyphrase_tokens) > 0:
                keyphrase_tokens[-1].append(token_id)
    return keyphrase_tokens


def extract_keyphrases(example, predictions, tokenizer, index=0):
    # Keep only the tokens tagged as part of a keyphrase ("B" or "I")
    keyphrases_list = [
        (token_id, idx2label[label])
        for token_id, label in zip(
            np.array(example["input_ids"]).squeeze().tolist(), predictions[index]
        )
        if idx2label[label] in ["B", "I"]
    ]

    processed_keyphrases = concat_tokens_by_tag(keyphrases_list)
    extracted_kps = tokenizer.batch_decode(
        processed_keyphrases,
        skip_special_tokens=True,
        clean_up_tokenization_spaces=True,
    )
    return np.unique([kp.strip() for kp in extracted_kps])


# Load model and tokenizer
model_name = "DeDeckerThomas/keyphrase-extraction-kbir-inspec"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Label mapping and maximum sequence length
# (assumed B/I/O ordering; adjust if the model config specifies otherwise)
idx2label = {0: "B", 1: "I", 2: "O"}
max_length = 512

# Inference: paste the document to analyze between the triple quotes
text = """
""".replace(
    "\n", ""
)

encoded_input = tokenizer(
    text.split(" "),
    is_split_into_words=True,
    truncation=True,
    padding="max_length",
    max_length=max_length,
    return_tensors="pt",
)

output = model(**encoded_input)
logits = output.logits.detach().numpy()
predictions = np.argmax(logits, axis=2)

extracted_kps = extract_keyphrases(encoded_input, predictions, tokenizer)

print("***** Input Document *****")
print(text)

print("***** Prediction *****")
print(extracted_kps)
```
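For long documents or batched inference, it can help to disable gradient tracking and move the model to a GPU when one is available. A minimal sketch reusing the names from the snippet above:

```python
import torch

# Run the same forward pass without gradient tracking, on GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
with torch.no_grad():
    output = model(**{k: v.to(device) for k, v in encoded_input.items()})
logits = output.logits.cpu().numpy()
predictions = np.argmax(logits, axis=2)
```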

## 📚 Training Dataset
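The model was fine-tuned on the [midas/inspec](https://huggingface.co/datasets/midas/inspec) dataset listed in the metadata above. A minimal sketch for inspecting it; the "extraction" configuration name is an assumption, so check the dataset card:

```python
from datasets import load_dataset

# Load the Inspec keyphrase dataset from the Hugging Face Hub
# (the "extraction" config name is an assumption)
dataset = load_dataset("midas/inspec", "extraction")
print(dataset)
```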

## 👷‍♂️ Training procedure

### Preprocessing

## 📝 Evaluation results

The model achieves the following results on the Inspec test set. Here, @k means the metric is computed over the top k predicted keyphrases, and @M over all predicted keyphrases:

| Dataset         | P@5  | R@5  | F1@5 | P@10 | R@10 | F1@10 | P@M  | R@M  | F1@M |
|:---------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|
| Inspec Test Set | 0.53 | 0.47 | 0.46 | 0.36 | 0.58 | 0.41  | 0.58 | 0.60 | 0.56 |
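For reference, a minimal sketch of how precision, recall, and F1 at k are typically computed for a single document; exact lowercase string matching is an assumption here (evaluation scripts often apply stemming as well):

```python
def prf_at_k(predicted, gold, k):
    # Compare the top-k predicted keyphrases against the gold keyphrases,
    # using exact matching after lowercasing (an assumption)
    topk = [kp.lower() for kp in predicted[:k]]
    gold_set = {kp.lower() for kp in gold}
    matches = sum(kp in gold_set for kp in topk)
    precision = matches / len(topk) if topk else 0.0
    recall = matches / len(gold_set) if gold_set else 0.0
    f1 = (
        2 * precision * recall / (precision + recall)
        if precision + recall > 0
        else 0.0
    )
    return precision, recall, f1
```

For the @M scores, k is simply set to the number of predicted keyphrases.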

### BibTeX entry and citation info