joaogante committed
Commit 11270db
1 Parent(s): e8717c2

Model Card with TensorFlow example


This PR adds a TensorFlow example that mirrors the PT example and uses the newly added TF weights.

PT example outputs:
```
0.9156370162963867 Around 9 Million people live in London
0.49475783109664917 London is known for its financial district
```

TF example outputs:
```
0.9156371355056763 Around 9 Million people live in London
0.49475765228271484 London is known for its financial district
```
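The two backends agree to within roughly 2e-7 on these scores, i.e. ordinary float32 noise. As a quick sanity check, here is a hypothetical snippet (not part of this commit; the tolerance is an assumption) that compares the two output lists numerically:

```python
import numpy as np

# Scores copied verbatim from the PT and TF runs above
pt_scores = [0.9156370162963867, 0.49475783109664917]
tf_scores = [0.9156371355056763, 0.49475765228271484]

# atol=1e-5 is an assumed tolerance; the observed gap is ~2e-7
assert np.allclose(pt_scores, tf_scores, atol=1e-5)
print(np.abs(np.array(pt_scores) - np.array(tf_scores)))  # per-score absolute differences
```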

Files changed (1): README.md (+59 -2)
README.md CHANGED

@@ -46,7 +46,7 @@ for doc, score in doc_score_pairs:
 ```
 
 
-## Usage (HuggingFace Transformers)
+## PyTorch Usage (HuggingFace Transformers)
 Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling-operation on-top of the contextualized word embeddings.
 
 ```python
@@ -56,7 +56,7 @@ import torch.nn.functional as F
 
 #Mean Pooling - Take average of all tokens
 def mean_pooling(model_output, attention_mask):
-    token_embeddings = model_output.last_hidden_state #First element of model_output contains all token embeddings
+    token_embeddings = model_output.last_hidden_state
     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
 
@@ -105,6 +105,63 @@ for doc, score in doc_score_pairs:
     print(score, doc)
 ```
 
+## TensorFlow Usage (HuggingFace Transformers)
+Similarly to the PyTorch example above, to use the model with TensorFlow you pass your input through the transformer model, then you have to apply the correct pooling-operation on-top of the contextualized word embeddings.
+
+```python
+from transformers import AutoTokenizer, TFAutoModel
+import tensorflow as tf
+
+#Mean Pooling - Take attention mask into account for correct averaging
+def mean_pooling(model_output, attention_mask):
+    token_embeddings = model_output.last_hidden_state
+    input_mask_expanded = tf.cast(tf.tile(tf.expand_dims(attention_mask, -1), [1, 1, token_embeddings.shape[-1]]), tf.float32)
+    return tf.math.reduce_sum(token_embeddings * input_mask_expanded, 1) / tf.math.maximum(tf.math.reduce_sum(input_mask_expanded, 1), 1e-9)
+
+
+#Encode text
+def encode(texts):
+    # Tokenize sentences
+    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='tf')
+
+    # Compute token embeddings
+    model_output = model(**encoded_input, return_dict=True)
+
+    # Perform pooling
+    embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+    # Normalize embeddings
+    embeddings = tf.math.l2_normalize(embeddings, axis=1)
+
+    return embeddings
+
+
+# Sentences we want sentence embeddings for
+query = "How many people live in London?"
+docs = ["Around 9 Million people live in London", "London is known for its financial district"]
+
+# Load model from HuggingFace Hub
+tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
+model = TFAutoModel.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
+
+#Encode query and docs
+query_emb = encode(query)
+doc_emb = encode(docs)
+
+#Compute dot score between query and all document embeddings
+scores = (query_emb @ tf.transpose(doc_emb))[0].numpy().tolist()
+
+#Combine docs & scores
+doc_score_pairs = list(zip(docs, scores))
+
+#Sort by decreasing score
+doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
+
+#Output passages & scores
+for doc, score in doc_score_pairs:
+    print(score, doc)
+```
+
 ## Technical Details
 
 In the following some technical details how this model must be used: