Tags: Token Classification · GLiNER · PyTorch · English · NER · information extraction · encoder · entity recognition · modernbert
Commit 85b4ebf (verified) · 1 Parent(s): f3b67dc
Ihor committed: Update README.md

Files changed (1): README.md (+5 -0)
README.md CHANGED
@@ -28,6 +28,8 @@ Such architecture brings several advantages over uni-encoder GLiNER:
 
 Utilization of ModernBERT uncovers up to 3 times better efficiency in comparison to DeBERTa-based models and context length up to 8,192 tokens while demonstrating comparable results.
 
+![inference time comparison](modernbert_inference_time.png "Inference time comparison")
+
 However, bi-encoder architecture has some drawbacks such as a lack of inter-label interactions that make it hard for the model to disambiguate semantically similar but contextually different entities.
 
 ### Installation & Usage
@@ -98,6 +100,9 @@ outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)
 ```
 
 ### Benchmarks
+
+![results on different datasets](modernbert_benchmarking.png "Results on different datasets")
+
 Below you can see the table with benchmarking results on various named entity recognition datasets:
 
 | Dataset | Score |
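For context, the second hunk's anchor line (`batch_predict_with_embeds`) reflects the bi-encoder workflow this README documents: the label set is embedded once and reused across all input texts. A minimal sketch of that flow, assuming the `gliner` package; the model id is a placeholder (substitute this repo's id), and `encode_labels` with its `batch_size` argument follows the pattern shown in knowledgator's bi-encoder model cards rather than anything confirmed by this commit:

```python
from gliner import GLiNER

# Placeholder model id -- substitute the id of this repository's model.
model = GLiNER.from_pretrained("knowledgator/modern-gliner-bi-base-v1.0")

labels = ["person", "organization", "location", "date"]
texts = [
    "Satya Nadella joined Microsoft in 1992.",
    "The Louvre is located in Paris, France.",
]

# Bi-encoder advantage: encode the labels once, then reuse the embeddings
# for every batch of texts (encode_labels/batch_size are assumptions here;
# batch_predict_with_embeds is the call quoted in the diff's hunk header).
entity_embeddings = model.encode_labels(labels, batch_size=8)
outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)

# Each output is a list of entity dicts; "text" and "label" keys follow
# GLiNER's predict_entities output format.
for text, entities in zip(texts, outputs):
    print(text)
    for entity in entities:
        print(f'  {entity["text"]} => {entity["label"]}')
```

Because the label embeddings are computed outside the per-text loop, the encoding cost of a large label set is paid once, which is where the efficiency gain described in the first hunk comes from.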