---
license: apache-2.0
language:
- multilingual
library_name: gliner
datasets:
- urchade/pile-mistral-v0.1
- knowledgator/GLINER-multi-task-synthetic-data
- EmergentMethods/AskNews-NER-v0
pipeline_tag: token-classification
tags:
- NER
- GLiNER
- information extraction
- encoder
- entity recognition
---
# About

GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and to Large Language Models (LLMs), which, despite their flexibility, are too costly and large for resource-constrained scenarios.

The initial versions of GLiNER relied on older encoder architectures such as BERT and DeBERTa. These models, however, were trained on smaller datasets and lacked support for modern optimization techniques such as flash attention. Additionally, their context window was typically limited to 512 tokens, which is insufficient for many practical applications. Recognizing these limitations, we began exploring alternative backbones for GLiNER.

This latest model leverages the LLM2Vec approach, transforming the initial decoder model into a bidirectional encoder. We further enhanced the model by pre-training it on the masked token prediction task using the Wikipedia corpus. This approach introduces several advancements for GLiNER, including support for flash attention, an extended context window, and faster inference times. Additionally, by utilizing modern decoders trained on large, up-to-date datasets, the model exhibits improved generalization and performance.

Key Advantages Over Previous GLiNER Models:

* Enhanced performance and generalization capabilities
* Support for Flash Attention
* Extended context window (up to 32k tokens)

While these models are larger and require more computational resources compared to older encoders, they are still considered relatively small given current standards and provide significant benefits for a wide range of use cases.

### Installation & Usage
Install or update the gliner package:
```bash
pip install gliner -U
```
Then install the llm2vec package:
```bash
pip install llm2vec
```
To use this particular Qwen-based model, you need a different `transformers` version than the one llm2vec requires, so install it manually:
```bash
pip install transformers==4.44.1
```

Once you've installed the GLiNER library, you can import the `GLiNER` class, load this model with `GLiNER.from_pretrained`, and predict entities with `predict_entities`.

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("knowledgator/gliner-qwen-1.5B-v1.0")

text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""

labels = ["person", "award", "date", "competitions", "teams"]

entities = model.predict_entities(text, labels, threshold=0.5)

for entity in entities:
    print(entity["text"], "=>", entity["label"])
```

```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
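Each predicted entity is returned as a dictionary that also carries a confidence score, which makes the results straightforward to post-process. As a small illustration (hard-coded sample predictions are used here in place of model output, so the snippet runs without downloading the model), you might group confident predictions by label:

```python
from collections import defaultdict

# Sample predictions in the dictionary format returned by `predict_entities`
# (hard-coded here so the snippet runs without loading the model).
entities = [
    {"text": "Cristiano Ronaldo dos Santos Aveiro", "label": "person", "score": 0.97},
    {"text": "5 February 1985", "label": "date", "score": 0.97},
    {"text": "Al Nassr", "label": "teams", "score": 0.92},
    {"text": "Ballon d'Or", "label": "award", "score": 0.94},
    {"text": "UEFA Champions Leagues", "label": "competitions", "score": 0.90},
]

def group_by_label(entities, min_score=0.5):
    """Group entity texts by label, keeping only sufficiently confident predictions."""
    grouped = defaultdict(list)
    for entity in entities:
        if entity["score"] >= min_score:
            grouped[entity["label"]].append(entity["text"])
    return dict(grouped)

print(group_by_label(entities))
```

Raising `min_score` above the prediction threshold used in `predict_entities` is a simple way to trade recall for precision without re-running the model.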

If you want to use flash attention or increase the sequence length, check the following code:
```python
from gliner import GLiNER
import torch

model = GLiNER.from_pretrained(
    "knowledgator/gliner-qwen-1.5B-v1.0",
    _attn_implementation="flash_attention_2",
    max_length=2048,
).to("cuda:0", dtype=torch.float16)
```

### Benchmarks
The table below shows benchmark results on various named entity recognition datasets:

| Dataset                     | Score  |
|-----------------------------|--------|
| ACE 2004                    | 29.8%  |
| ACE 2005                    | 26.8%  |
| AnatEM                      | 43.7%  |
| Broad Tweet Corpus          | 68.3%  |
| CoNLL 2003                  | 67.5%  |
| FabNER                      | 24.9%  |
| FindVehicle                 | 33.2%  |
| GENIA_NER                   | 58.8%  |
| HarveyNER                   | 19.5%  |
| MultiNERD                   | 65.1%  |
| Ontonotes                   | 39.9%  |
| PolyglotNER                 | 45.8%  |
| TweetNER7                   | 37.0%  |
| WikiANN en                  | 56.0%  |
| WikiNeural                  | 78.3%  |
| bc2gm                       | 58.1%  |
| bc4chemd                    | 65.7%  |
| bc5cdr                      | 72.3%  |
| ncbi                        | 63.3%  |
| **Average**                 | **50.2%** |
|                             |        |
| CrossNER_AI                 | 58.3%  |
| CrossNER_literature         | 64.4%  |
| CrossNER_music              | 71.5%  |
| CrossNER_politics           | 70.5%  |
| CrossNER_science            | 65.1%  |
| mit-movie                   | 47.5%  |
| mit-restaurant              | 33.1%  |
| **Average (zero-shot benchmark)** | **58.6%** |
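As a sanity check, the reported averages follow directly from the per-dataset scores in the two tables:

```python
# Per-dataset scores from the tables above (in percent).
main_benchmarks = [29.8, 26.8, 43.7, 68.3, 67.5, 24.9, 33.2, 58.8, 19.5,
                   65.1, 39.9, 45.8, 37.0, 56.0, 78.3, 58.1, 65.7, 72.3, 63.3]
zero_shot = [58.3, 64.4, 71.5, 70.5, 65.1, 47.5, 33.1]

print(round(sum(main_benchmarks) / len(main_benchmarks), 1))  # 50.2
print(round(sum(zero_shot) / len(zero_shot), 1))              # 58.6
```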

### Join Our Discord

Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG).