nreimers committed · Commit a71956f · 1 parent: 8492ced
README.md ADDED
# Natural Questions Models
[Google's Natural Questions dataset](https://ai.google.com/research/NaturalQuestions) consists of about 100k real search queries from Google, each paired with a relevant passage from Wikipedia. Models trained on this dataset work well for question-answer retrieval.

## Usage (Sentence Transformers)
Using this model is easiest when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('nq-distilbert-base-v1')

query_embedding = model.encode('How many people live in London?')

# The passages are encoded as [[title1, text1], [title2, text2], ...]
passage_embedding = model.encode([['London', 'London has 9,787,426 inhabitants at the 2011 census.']])

print("Similarity:", util.pytorch_cos_sim(query_embedding, passage_embedding))
```

Note: For the passage, we have to encode the Wikipedia article title together with a text paragraph from that article.

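To retrieve the best passage among several candidates, a minimal sketch using `util.semantic_search` from sentence-transformers could look like the following (the extra example passages are made up for illustration):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('nq-distilbert-base-v1')

# Hypothetical mini-corpus of [title, paragraph] pairs
passages = [
    ['London', 'London has 9,787,426 inhabitants at the 2011 census.'],
    ['Paris', 'Paris is the capital and most populous city of France.'],
]
passage_embeddings = model.encode(passages, convert_to_tensor=True)

query_embedding = model.encode('How many people live in London?', convert_to_tensor=True)

# Rank the passages by cosine similarity to the query
hits = util.semantic_search(query_embedding, passage_embeddings, top_k=2)[0]
for hit in hits:
    print(passages[hit['corpus_id']][0], hit['score'])
```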

## Usage (HuggingFace Models Repository)

You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask


# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']

# Passages that provide answers
titles = ['Paris', 'New York City']
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']

# Load AutoModel from huggingface model repository ("model_name" is a placeholder for this model's repository name)
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")

def compute_embeddings(sentences, titles=None):
    # Tokenize sentences; passages are encoded as (title, text) pairs
    if titles is not None:
        encoded_input = tokenizer(titles, sentences, padding=True, truncation=True, return_tensors='pt')
    else:
        encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input)

    # Perform pooling. In this case, mean pooling
    return mean_pooling(model_output, encoded_input['attention_mask'])

query_embeddings = compute_embeddings(queries)
passage_embeddings = compute_embeddings(passages, titles)
```
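With these embeddings, query–passage relevance can then be scored with plain cosine similarity. The short follow-up below is a sketch (reusing `query_embeddings` and `passage_embeddings` from the snippet above) rather than part of the original card:

```python
import torch.nn.functional as F

# L2-normalize, then a matrix product gives all pairwise cosine similarities
query_norm = F.normalize(query_embeddings, p=2, dim=1)
passage_norm = F.normalize(passage_embeddings, p=2, dim=1)
scores = query_norm @ passage_norm.T  # shape: (num_queries, num_passages)

print(scores)
```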

## Performance
For performance details, see [SBERT.net - Pre-Trained Models - Natural Questions](https://www.sbert.net/docs/pretrained-models/nq-v1.html).

## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}
```
config.json ADDED
{
    "_name_or_path": "/home/ukp-reimers/msmarco/output/distilbert-base-uncased-mined_hard_neg-mean-pooling-no_identifier-epoch10-batchsize75-NTXentLossTriplet-2021-01-08_17-47-51/0_Transformer",
    "activation": "gelu",
    "architectures": [
        "DistilBertModel"
    ],
    "attention_dropout": 0.1,
    "dim": 768,
    "dropout": 0.1,
    "hidden_dim": 3072,
    "initializer_range": 0.02,
    "max_position_embeddings": 512,
    "model_type": "distilbert",
    "n_heads": 12,
    "n_layers": 6,
    "pad_token_id": 0,
    "qa_dropout": 0.1,
    "seq_classif_dropout": 0.2,
    "sinusoidal_pos_embds": false,
    "tie_weights_": true,
    "vocab_size": 30522
}
pytorch_model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f12c8e5381d61116632b98a428a45916e46b5e49350b9f26fb31ccfbfbfcc642
size 265491187
sentence_bert_config.json ADDED
{
    "max_seq_length": 512
}
special_tokens_map.json ADDED
{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json ADDED
{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "name_or_path": "/home/ukp-reimers/msmarco/output/distilbert-base-uncased-mined_hard_neg-mean-pooling-no_identifier-epoch10-batchsize75-NTXentLossTriplet-2021-01-08_17-47-51/0_Transformer", "special_tokens_map_file": "/home/ukp-reimers/msmarco/output/distilbert-base-uncased-mined_hard_neg-mean-pooling-no_identifier-epoch10-batchsize75-NTXentLossTriplet-2021-01-08_17-47-51/0_Transformer/special_tokens_map.json", "do_basic_tokenize": true, "never_split": null}
vocab.txt ADDED
The diff for this file is too large to render.