|
--- |
|
language: en |
|
license: mit |
|
pipeline_tag: sentence-similarity |
|
tags: |
|
- feature-extraction |
|
- sentence-similarity |
|
- sentence-transformers |
|
--- |
|
|
|
# Multi QA MPNet base model for Semantic Search |
|
|
|
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources.
|
|
|
This model is based on [`mpnet-base`](https://huggingface.co/microsoft/mpnet-base).
|
|
|
## Training Data |
|
We used a concatenation of multiple datasets to fine-tune this model, about 215M (question, answer) pairs in total. The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss), using mean pooling, cosine similarity as the similarity function, and a scale of 20.
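The loss above can be sketched as follows. This is an illustrative NumPy sketch, not the actual training code (the function name and shapes are assumptions): each question in a batch is paired with its matching answer, all other answers in the batch serve as in-batch negatives, and a cross-entropy objective over the scaled cosine-similarity matrix pushes the matching pair's score above the rest.

```python
import numpy as np

def mnr_loss(q_emb, a_emb, scale=20.0):
    """Sketch of MultipleNegativesRankingLoss with in-batch negatives.

    q_emb, a_emb: (batch, dim) arrays where row i of a_emb is the
    positive answer for row i of q_emb; all other rows act as negatives.
    """
    # Normalize to unit length so the dot product equals cosine similarity.
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    a = a_emb / np.linalg.norm(a_emb, axis=1, keepdims=True)
    scores = scale * (q @ a.T)  # (batch, batch) scaled similarity matrix
    # Cross-entropy with the matching answer (the diagonal) as the target.
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

With identical question/answer embeddings the loss is near zero, while mismatched pairs yield a much higher loss, which is exactly the ranking behavior the training objective rewards.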
|
|
|
|
|
| Dataset | Number of training tuples | |
|
|--------------------------------------------------------|:--------------------------:| |
|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 | |
|
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 | |
|
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 | |
|
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 | |
|
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 | |
|
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 | |
|
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839 |
|
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 | |
|
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 | |
|
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 | |
|
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 | |
|
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 | |
|
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 | |
|
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 | |
|
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 | |
|
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 | |
|
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 | |
|
| **Total** | **214,988,242** | |
|
|
|
## Technical Details |
|
The following technical details describe how this model should be used:
|
|
|
| Setting | Value | |
|
| --- | :---: | |
|
| Dimensions | 768 | |
|
| Produces normalized embeddings | Yes | |
|
| Pooling-Method | Mean pooling | |
|
| Suitable score functions | dot-product, cosine-similarity, or euclidean distance | |
|
|
|
Note: This model produces normalized embeddings of length 1. In that case, dot product and cosine similarity are equivalent; dot product is preferred because it is faster. Euclidean distance between normalized embeddings is a monotonic function of the dot product (‖a − b‖² = 2 − 2·a·b), so it produces the same ranking and can also be used.
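A quick numerical check of this equivalence (an illustrative NumPy sketch; the random vectors stand in for model embeddings):

```python
import numpy as np

rng = np.random.default_rng(42)
a, b = rng.normal(size=(2, 768))
a /= np.linalg.norm(a)  # normalize to unit length, as the model does
b /= np.linalg.norm(b)

dot = a @ b
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
sq_dist = np.sum((a - b) ** 2)

# For unit vectors: cosine == dot product, and ||a - b||^2 == 2 - 2 * dot,
# so all three score functions rank contexts identically.
assert np.isclose(dot, cos)
assert np.isclose(sq_dist, 2 - 2 * dot)
```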
|
|
|
## Usage and Performance |
|
The trained model can be used like this: |
|
```python |
|
from sentence_transformers import SentenceTransformer, util |
|
|
|
question = "That is a happy person" |
|
contexts = [ |
|
"That is a happy dog", |
|
"That is a very happy person", |
|
"Today is a sunny day" |
|
] |
|
|
|
# Load the model |
|
model = SentenceTransformer('navteca/multi-qa-mpnet-base-cos-v1')
|
|
|
# Encode question and contexts |
|
question_emb = model.encode(question) |
|
contexts_emb = model.encode(contexts) |
|
|
|
# Compute dot score between question and all contexts embeddings |
|
result = util.dot_score(question_emb, contexts_emb)[0].cpu().tolist() |
|
|
|
print(result) |
|
|
|
#[ |
|
# 0.60806852579116820, |
|
# 0.94949364662170410, |
|
# 0.29836517572402954 |
|
#]
```
|
|