ssmits committed (verified)
Commit a59c5da · 1 Parent(s): 4b868cd

Update README.md

Files changed (1)
  1. README.md +92 -10
README.md CHANGED
@@ -1,20 +1,102 @@
  ---
  language:
  - en
  tags:
- - embeddings
- - base-model
- - qwen
  license: apache-2.0
  ---

  # Qwen2.5-7B-Instruct-embed-base

- This is a base model derived from Qwen2.5-7B-Instruct with the language modeling head removed.
- It's intended to be used as a base for embedding tasks and further fine-tuning.
-
  ## Model Details
- - Base model: Qwen2.5-7B-Instruct
- - The 'lm_head' layer has been removed
- - Maintains the original model's norm layers
- - Suitable for embedding tasks and custom head additions
  ---
  language:
  - en
+ pipeline_tag: text-classification
  tags:
+ - pretrained
  license: apache-2.0
+ library_name: sentence-transformers
  ---

  # Qwen2.5-7B-Instruct-embed-base

  ## Model Details
+ Qwen2.5 is a language model series that includes decoder language models of different sizes. For each size, we release the base language model and the aligned chat model. The models are based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code.
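+
+ As a sanity check, you can inspect the exported configuration to confirm the dimensions and grouped-query attention setup described above; this is a minimal sketch using the standard `AutoConfig` API:
+
+ ```python
+ from transformers import AutoConfig
+
+ # Inspect the exported config of this repository
+ config = AutoConfig.from_pretrained("ssmits/Qwen2.5-7B-Instruct-embed-base")
+ print(config.hidden_size)          # embedding dimension (3584, matching the output shape below)
+ print(config.num_attention_heads)  # number of query heads
+ print(config.num_key_value_heads)  # fewer KV heads than query heads -> grouped-query attention
+ ```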
+
+ ## Requirements
+ The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
+ ```
+ KeyError: 'Qwen2.5'
+ ```
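+
+ If you want to verify your environment programmatically, a minimal version check (assuming the `packaging` library, which ships as a `transformers` dependency) looks like this:
+
+ ```python
+ from packaging import version
+ import transformers
+
+ # Qwen2.5 support requires transformers >= 4.37.0
+ assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
+     "Please upgrade: transformers >= 4.37.0 is required"
+ ```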
+
+ ## Usage
+ The 'lm_head' layer of this model has been removed, which means it can be used directly for embeddings. It will not perform strongly out of the box; it needs further fine-tuning, as demonstrated by [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct).
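+
+ As a hypothetical starting point for such fine-tuning, you can wrap the headless transformer with an explicit mean-pooling module using the standard sentence-transformers building blocks (a sketch, not a tuned recipe):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, models
+
+ # Wrap the headless transformer with an explicit mean-pooling module
+ word_embedding_model = models.Transformer("ssmits/Qwen2.5-7B-Instruct-embed-base")
+ pooling_model = models.Pooling(
+     word_embedding_model.get_word_embedding_dimension(),
+     pooling_mode="mean",
+ )
+ model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
+ # From here, the model can be fine-tuned with the usual sentence-transformers losses
+ ```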
+
+ ## Inference
+ ```python
+ from sentence_transformers import SentenceTransformer
+ import torch
+
+ # 1. Load a pretrained Sentence Transformer model
+ model = SentenceTransformer("ssmits/Qwen2.5-7B-Instruct-embed-base")  # pass device="cpu" if you have <= 24 GB of VRAM
+
+ # The sentences to encode
+ sentences = [
+     "The weather is lovely today.",
+     "It's so sunny outside!",
+     "He drove to the stadium.",
+ ]
+
+ # 2. Calculate embeddings by calling model.encode()
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # (3, 3584)
+
+ # 3. Calculate the embedding similarities
+ # embeddings is a numpy array, so convert it to a torch tensor first
+ embeddings_tensor = torch.tensor(embeddings)
+
+ # Use torch to compute the pairwise cosine similarity matrix
+ similarities = torch.nn.functional.cosine_similarity(embeddings_tensor.unsqueeze(0), embeddings_tensor.unsqueeze(1), dim=2)
+
+ print(similarities)
+ # tensor([[1.0000, 0.8608, 0.6609],
+ #         [0.8608, 1.0000, 0.7046],
+ #         [0.6609, 0.7046, 1.0000]])
+ ```
+
+ Note: in my tests the model uses more than 24 GB of VRAM (RTX 4090), so an A100 or A6000 would be required for GPU inference.
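+
+ If you want to try fitting the model on a smaller GPU, loading the weights in half precision is one option; this sketch assumes a recent sentence-transformers release that forwards `model_kwargs` to `transformers`:
+
+ ```python
+ import torch
+ from sentence_transformers import SentenceTransformer
+
+ # Load the weights in bfloat16 to roughly halve memory use (small precision trade-off)
+ model = SentenceTransformer(
+     "ssmits/Qwen2.5-7B-Instruct-embed-base",
+     model_kwargs={"torch_dtype": torch.bfloat16},
+ )
+ embeddings = model.encode(["The weather is lovely today."])
+ ```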
+
+ ## Inference (HuggingFace Transformers)
+ Without sentence-transformers, you can use the model as follows: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized token embeddings.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ import torch
+
+ # Mean pooling - take the attention mask into account for correct averaging
+ def mean_pooling(model_output, attention_mask):
+     token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
+     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+ # Sentences we want sentence embeddings for
+ sentences = ['This is an example sentence', 'Each sentence is converted']
+
+ # Load model from HuggingFace Hub
+ tokenizer = AutoTokenizer.from_pretrained('ssmits/Qwen2.5-7B-Instruct-embed-base')
+ model = AutoModel.from_pretrained('ssmits/Qwen2.5-7B-Instruct-embed-base')  # use device "cpu" if you have <= 24 GB of VRAM
+
+ # Tokenize sentences
+ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+ # Compute token embeddings
+ with torch.no_grad():
+     model_output = model(**encoded_input)
+
+ # Perform pooling. In this case, mean pooling.
+ sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+ print("Sentence embeddings:")
+ print(sentence_embeddings)
+ ```
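+
+ Continuing from the snippet above, if you need cosine similarities along this raw-transformers path, L2-normalizing the pooled embeddings first turns them into a plain dot product:
+
+ ```python
+ import torch.nn.functional as F
+
+ # L2-normalize so that a dot product equals cosine similarity
+ normalized_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
+ similarities = normalized_embeddings @ normalized_embeddings.T
+ print(similarities)
+ ```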
+
+ ### How to enable Multi-GPU
+ ```python
+ from transformers import AutoModel
+ from torch.nn import DataParallel
+
+ model = AutoModel.from_pretrained("ssmits/Qwen2.5-7B-Instruct-embed-base")
+ # Wrap each top-level submodule in DataParallel so its forward pass is replicated across GPUs
+ for module_key, module in model._modules.items():
+     model._modules[module_key] = DataParallel(module)
+ ```
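+
+ As an alternative sketch, if `accelerate` is installed, the model can also be sharded across the available GPUs with `device_map` instead of DataParallel:
+
+ ```python
+ from transformers import AutoModel
+
+ # Requires the `accelerate` package; splits the layers across all visible GPUs
+ model = AutoModel.from_pretrained(
+     "ssmits/Qwen2.5-7B-Instruct-embed-base",
+     device_map="auto",
+ )
+ ```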