modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard |
---|---|---|---|---|---|---|---|---|
sentence-transformers/ce-roberta-large-stsb | 2021-05-20T20:22:50.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"CECorrelationEvaluator_sts-dev_results.csv",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 121 | transformers | |
sentence-transformers/distilbert-base-nli-stsb-mean-tokens | 2021-06-04T21:48:43.000Z | [
"pytorch",
"distilbert",
"en",
"dataset:stsb",
"transformers",
"feature-extraction"
]
| feature-extraction | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 24,465 | transformers | ---
language: en
tags:
- feature-extraction
datasets:
- stsb
widget:
- text: "Hello, world"
---
|
sentence-transformers/distilbert-base-nli-stsb-quora-ranking | 2020-08-06T08:47:53.000Z | [
"pytorch",
"distilbert",
"transformers"
]
| [
".gitattributes",
"binary_similarity_evaluation_results.csv",
"config.json",
"modules.json",
"paraphrase_mining_evaluation_dev_results.csv",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 165 | transformers | ||
sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking | 2020-08-28T17:57:35.000Z | [
"pytorch",
"distilbert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 918 | transformers | ||
sentence-transformers/msmarco-MiniLM-L-12-v3 | 2021-05-20T05:28:25.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 54 | transformers | # Sentence Embedding Model for MS MARCO Passage Retrieval
This is a `MiniLM-L-12` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): given a search query, it finds the relevant passages.
You can use this model for semantic search. Details can be found on: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html).
This model was optimized to be used with **cosine similarity** as the similarity function between queries and documents (a scoring sketch follows the usage examples below).
## Training
Details about the training of the models can be found here: [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html)
## Performance
For performance details, see: [SBERT.net - Pre-Trained Models - MS MARCO](https://www.sbert.net/docs/pretrained-models/msmarco-v3.html)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
def compute_embeddings(sentences):
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
#Compute query embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
return mean_pooling(model_output, encoded_input['attention_mask'])
query_embeddings = compute_embeddings(queries)
passage_embeddings = compute_embeddings(passages)
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)
```
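Since this model is meant to be scored with cosine similarity, a minimal sketch of ranking passages for a query could look like the following. It builds on the snippet above; the short model name and the `util.pytorch_cos_sim` helper mirror the usage shown in the nq-distilbert card further down, so treat it as an illustration rather than the authors' reference code:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('msmarco-MiniLM-L-12-v3')  # assumed short name; use the full repository id if needed

queries = ['What is the capital of France?']
passages = ['Paris is the capital of France',
            'New York City is the most populous city in the United States']

# Encode to tensors so the cosine-similarity helper can be applied directly
query_embeddings = model.encode(queries, convert_to_tensor=True)
passage_embeddings = model.encode(passages, convert_to_tensor=True)

# One row per query, one column per passage; higher means more relevant
scores = util.pytorch_cos_sim(query_embeddings, passage_embeddings)
print(scores)
```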
## Changes in v3
The v2 models were used to retrieve similar passages for all training queries. An [MS MARCO Cross-Encoder](ce-msmarco.md) based on the electra-base model was then used to classify whether these retrieved passages answer the question.
If a passage received a low score from the cross-encoder, we saved it as a hard negative: it got a high score from the bi-encoder, but a low score from the (better) cross-encoder.
We then trained the v3 models with these new hard negatives.
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/msmarco-MiniLM-L-6-v3 | 2021-05-20T05:28:50.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 100 | transformers | # Sentence Embedding Model for MS MARCO Passage Retrieval
This is a `MiniLM-L-6` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): given a search query, it finds the relevant passages.
You can use this model for semantic search. Details can be found on: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html).
This model was optimized to be used with **cosine similarity** as the similarity function between queries and documents.
## Training
Details about the training of the models can be found here: [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html)
## Performance
For performance details, see: [SBERT.net - Pre-Trained Models - MS MARCO](https://www.sbert.net/docs/pretrained-models/msmarco-v3.html)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
def compute_embeddings(sentences):
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
#Compute query embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
return mean_pooling(model_output, encoded_input['attention_mask'])
query_embeddings = compute_embeddings(queries)
passage_embeddings = compute_embeddings(passages)
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)
```
## Changes in v3
The v2 models were used to retrieve similar passages for all training queries. An [MS MARCO Cross-Encoder](ce-msmarco.md) based on the electra-base model was then used to classify whether these retrieved passages answer the question.
If a passage received a low score from the cross-encoder, we saved it as a hard negative: it got a high score from the bi-encoder, but a low score from the (better) cross-encoder.
We then trained the v3 models with these new hard negatives.
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/msmarco-distilbert-base-dot-prod-v3 | 2021-04-15T19:14:25.000Z | [
"pytorch",
"distilbert",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 625 | transformers | # Sentence Embedding Model for MS MARCO Passage Retrieval
This is a `distilbert-base` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): given a search query, it finds the relevant passages.
You can use this model for semantic search. Details can be found on: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html).
This model was optimized to be used with the **dot product** as the similarity function between queries and documents (a scoring sketch follows the usage examples below).
## Training
Details about the training of the models can be found here: [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html)
## Performance
For performance details, see: [SBERT.net - Pre-Trained Models - MS MARCO](https://www.sbert.net/docs/pretrained-models/msmarco-v3.html)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
def compute_embeddings(sentences):
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
#Compute query embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
return mean_pooling(model_output, encoded_input['attention_mask'])
query_embeddings = compute_embeddings(queries)
passage_embeddings = compute_embeddings(passages)
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)
```
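Because this model was tuned for dot-product scoring rather than cosine similarity, the embeddings should be compared with an unnormalized dot product. A minimal sketch building on the snippet above; the short model name is assumed, so adjust it to the full repository id if needed:
```python
from sentence_transformers import SentenceTransformer
import torch

model = SentenceTransformer('msmarco-distilbert-base-dot-prod-v3')  # assumed short name

queries = ['What is the capital of France?']
passages = ['Paris is the capital of France',
            'New York City is the most populous city in the United States']

query_embeddings = model.encode(queries, convert_to_tensor=True)
passage_embeddings = model.encode(passages, convert_to_tensor=True)

# Unnormalized dot-product scores: one row per query, one column per passage
scores = query_embeddings @ passage_embeddings.T
best = torch.argmax(scores, dim=1)
print(scores)
print('Best passage per query:', best.tolist())
```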
## Changes in v3
The v2 models were used to retrieve similar passages for all training queries. An [MS MARCO Cross-Encoder](ce-msmarco.md) based on the electra-base model was then used to classify whether these retrieved passages answer the question.
If a passage received a low score from the cross-encoder, we saved it as a hard negative: it got a high score from the bi-encoder, but a low score from the (better) cross-encoder.
We then trained the v3 models with these new hard negatives.
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/msmarco-distilbert-base-v2 | 2021-01-11T20:53:14.000Z | [
"pytorch",
"distilbert",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 583 | transformers | # Sentence Embedding Model for MS MARCO Passage Retrieval
This is a `distilbert-base` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): given a search query, it finds the relevant passages.
You can use this model for semantic search. Details can be found on: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) and [SBERT.net - Information Retrieval](https://www.sbert.net/examples/applications/information-retrieval/README.html)
## Training
Details about the training of the models can be found here: [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html)
## Performance
For performance details, see: [SBERT.net - Pre-Trained Models - MS MARCO](https://www.sbert.net/docs/pretrained-models/msmarco-v2.html)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
def compute_embeddings(sentences):
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
#Compute query embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
return mean_pooling(model_output, encoded_input['attention_mask'])
query_embeddings = compute_embeddings(queries)
passage_embeddings = compute_embeddings(passages)
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)
```
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/msmarco-distilbert-base-v3 | 2021-03-01T15:03:03.000Z | [
"pytorch",
"distilbert",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 3,273 | transformers | # Sentence Embedding Model for MS MARCO Passage Retrieval
This is a `distilbert-base` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): given a search query, it finds the relevant passages.
You can use this model for semantic search. Details can be found on: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) and [SBERT.net - Information Retrieval](https://www.sbert.net/examples/applications/information-retrieval/README.html)
## Training
Details about the training of the models can be found here: [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html)
## Performance
For performance details, see: [SBERT.net - Pre-Trained Models - MS MARCO](https://www.sbert.net/docs/pretrained-models/msmarco-v3.html)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
def compute_embeddings(sentences):
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
#Compute query embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
return mean_pooling(model_output, encoded_input['attention_mask'])
query_embeddings = compute_embeddings(queries)
passage_embeddings = compute_embeddings(passages)
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)
```
## Changes in v3
The v2 models were used to retrieve similar passages for all training queries. An [MS MARCO Cross-Encoder](ce-msmarco.md) based on the electra-base model was then used to classify whether these retrieved passages answer the question.
If a passage received a low score from the cross-encoder, we saved it as a hard negative: it got a high score from the bi-encoder, but a low score from the (better) cross-encoder.
We then trained the v3 models with these new hard negatives.
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/msmarco-distilroberta-base-v2 | 2021-05-20T20:25:45.000Z | [
"pytorch",
"jax",
"roberta",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 643 | transformers | # Sentence Embedding Model for MS MARCO Passage Retrieval
This is a `distilroberta-base` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): given a search query, it finds the relevant passages.
You can use this model for semantic search. Details can be found on: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) and [SBERT.net - Information Retrieval](https://www.sbert.net/examples/applications/information-retrieval/README.html)
## Training
Details about the training of the models can be found here: [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
def compute_embeddings(sentences):
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
#Compute query embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
return mean_pooling(model_output, encoded_input['attention_mask'])
query_embeddings = compute_embeddings(queries)
passage_embeddings = compute_embeddings(passages)
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)
```
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/msmarco-roberta-base-v2 | 2021-05-20T20:26:52.000Z | [
"pytorch",
"jax",
"roberta",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 36 | transformers | # Sentence Embedding Model for MS MARCO Passage Retrieval
This is a `roberta-base` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): given a search query, it finds the relevant passages.
You can use this model for semantic search. Details can be found on: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) and [SBERT.net - Information Retrieval](https://www.sbert.net/examples/applications/information-retrieval/README.html)
## Training
Details about the training of the models can be found here: [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
def compute_embeddings(sentences):
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
#Compute query embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
return mean_pooling(model_output, encoded_input['attention_mask'])
query_embeddings = compute_embeddings(queries)
passage_embeddings = compute_embeddings(passages)
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)
```
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/nli-distilroberta-base-v2 | 2021-05-20T20:28:03.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 133 | transformers | ||
sentence-transformers/nli-mpnet-base-v2 | 2021-04-30T21:46:55.000Z | [
"pytorch",
"mpnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 2,233 | transformers | ||
sentence-transformers/nli-roberta-base-v2 | 2021-05-20T20:28:48.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 520 | transformers | ||
sentence-transformers/nli-roberta-large | 2021-05-20T20:30:52.000Z | [
"pytorch",
"jax",
"roberta",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 551 | transformers | # Sentence Embeddings Models trained on NLI Data
This model is from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the SNLI and MultiNLI datasets. Further details on SBERT can be found in the paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of strings.',
'The quick brown fox jumps over the lazy dog.']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of strings.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/nq-distilbert-base-v1 | 2021-02-13T20:31:36.000Z | [
"pytorch",
"distilbert",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 118 | transformers | # Natural Questions Models
[Google's Natural Questions dataset](https://ai.google.com/research/NaturalQuestions) consists of about 100k real search queries from Google, each paired with the relevant passage from Wikipedia. Models trained on this dataset work well for question-answer retrieval.
## Usage (Sentence Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('nq-distilbert-base-v1')
query_embedding = model.encode('How many people live in London?')
#The passages are encoded as [ [title1, text1], [title2, text2], ...]
passage_embedding = model.encode([['London', 'London has 9,787,426 inhabitants at the 2011 census.']])
print("Similarity:", util.pytorch_cos_sim(query_embedding, passage_embedding))
```
Note: For the passage, we have to encode the Wikipedia article title together with a text paragraph from that article.
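To pick the best of several passages, the same (title, paragraph) encoding can be combined with cosine-similarity ranking. A minimal sketch building on the snippet above; the extra passage text is purely illustrative:
```python
from sentence_transformers import SentenceTransformer, util
import torch

model = SentenceTransformer('nq-distilbert-base-v1')

query_embedding = model.encode('How many people live in London?')

# Each passage is a [title, text] pair, as noted above
passages = [
    ['London', 'London has 9,787,426 inhabitants at the 2011 census.'],
    ['Paris', 'Paris is the capital of France.'],
]
passage_embeddings = model.encode(passages)

# Cosine similarity between the query and each passage, then pick the best one
scores = util.pytorch_cos_sim(query_embedding, passage_embeddings)
best = torch.argmax(scores, dim=1).item()
print('Best passage:', passages[best])
```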
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']
# Passages that provide answers
titles = ['Paris', 'New York City']
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
def compute_embeddings(sentences, titles=None):
#Tokenize sentences
if titles is not None:
encoded_input = tokenizer(titles, sentences, padding=True, truncation=True, return_tensors='pt')
else:
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
#Compute query embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
return mean_pooling(model_output, encoded_input['attention_mask'])
query_embeddings = compute_embeddings(queries)
passage_embeddings = compute_embeddings(passages, titles)
```
## Performance
For performance details, see: [SBERT.net - Pre-Trained Models - Natural Questions](https://www.sbert.net/docs/pretrained-models/nq-v1.html)
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/paraphrase-MiniLM-L12-v2 | 2021-05-19T12:02:42.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 1,122 | transformers | ||
sentence-transformers/paraphrase-MiniLM-L3-v2 | 2021-05-31T12:35:09.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 224 | transformers | ||
sentence-transformers/paraphrase-MiniLM-L6-v2 | 2021-05-19T19:58:43.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 1,156 | transformers | ||
sentence-transformers/paraphrase-TinyBERT-L6-v2 | 2021-05-28T11:07:53.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 529 | transformers | ||
sentence-transformers/paraphrase-albert-base-v2 | 2021-05-29T19:14:44.000Z | [
"pytorch",
"albert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"spiece.model",
"tokenizer.json",
"tokenizer_config.json"
]
| sentence-transformers | 31 | transformers | ||
sentence-transformers/paraphrase-albert-small-v2 | 2021-05-31T12:34:44.000Z | [
"pytorch",
"albert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"spiece.model",
"tokenizer.json",
"tokenizer_config.json"
]
| sentence-transformers | 33 | transformers | ||
sentence-transformers/paraphrase-distilroberta-base-v1 | 2021-05-20T20:32:01.000Z | [
"pytorch",
"jax",
"roberta",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 2,734 | transformers | # Sentence Embeddings Models trained on Paraphrases
This model is from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on millions of paraphrase sentences. Further details on SBERT can be found in the paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of strings.',
'The quick brown fox jumps over the lazy dog.']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of strings.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/paraphrase-distilroberta-base-v2 | 2021-05-20T20:32:50.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 930 | transformers | ||
sentence-transformers/paraphrase-mpnet-base-v2 | 2021-05-20T06:46:11.000Z | [
"pytorch",
"mpnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 20,114 | transformers | ||
sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | 2021-06-09T15:06:27.000Z | [
"pytorch",
"bert",
"sentence-transformers",
"feature-extraction"
]
| feature-extraction | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| sentence-transformers | 1,066 | sentence-transformers | ---
tags:
- sentence-transformers
- feature-extraction
---
# Paraphrase multilingual MiniLM L12 v2 |
sentence-transformers/paraphrase-multilingual-mpnet-base-v2 | 2021-06-02T11:10:24.000Z | [
"pytorch",
"xlm-roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| sentence-transformers | 532 | transformers | ||
sentence-transformers/paraphrase-xlm-r-multilingual-v1 | 2021-06-03T08:22:02.000Z | [
"pytorch",
"xlm-roberta",
"arxiv:1908.10084",
"arxiv:2004.09813",
"sentence-transformers",
"feature-extraction"
]
| feature-extraction | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
]
| sentence-transformers | 1,886,737 | sentence-transformers | ---
tags:
- sentence-transformers
- feature-extraction
---
# Sentence Embeddings Models trained on Paraphrases
This model is from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on millions of paraphrase sentences. Further details on SBERT can be found in the paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
This model is the multilingual version of distilroberta-base-paraphrase-v1, trained on parallel data for 50+ languages.
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of strings.',
'The quick brown fox jumps over the lazy dog.']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of strings.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
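Since the model was trained on parallel data for 50+ languages, embeddings of a sentence and its translations should land close together. A minimal sketch; the short model name is assumed and the translated sentences are illustrative:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('paraphrase-xlm-r-multilingual-v1')  # assumed short name

sentences = [
    'This framework generates embeddings for each input sentence',     # English
    'Dieses Framework erzeugt Embeddings für jeden Eingabesatz',       # German (illustrative translation)
    "Ce framework génère des embeddings pour chaque phrase d'entrée",  # French (illustrative translation)
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities; translations of the same sentence should score high
scores = util.pytorch_cos_sim(embeddings, embeddings)
print(scores)
```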
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813):
```
@inproceedings{reimers-2020-multilingual-sentence-bert,
title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2004.09813",
}
``` |
sentence-transformers/quora-distilbert-base | 2021-01-12T09:53:00.000Z | [
"pytorch",
"distilbert",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 333 | transformers | # Sentence Embeddings Models trained on Duplicate Questions
This model is from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [Quora Duplicate Questions dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs). Further details on SBERT can be found in the paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
For more details, see: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of strings.',
'The quick brown fox jumps over the lazy dog.']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of strings.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
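Since the model was trained on Quora duplicate-question pairs, one natural use is flagging likely duplicates among a set of questions. A minimal sketch, assuming the short model name resolves and using an arbitrary example threshold:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('quora-distilbert-base')  # assumed short name

questions = [
    'How can I learn Python quickly?',
    'What is the fastest way to learn Python?',
    'What is the capital of France?',
]
embeddings = model.encode(questions, convert_to_tensor=True)

# Pairwise cosine similarities; pairs above the (example) threshold are flagged as possible duplicates
scores = util.pytorch_cos_sim(embeddings, embeddings)
threshold = 0.8  # example value, not a recommendation from the model authors
for i in range(len(questions)):
    for j in range(i + 1, len(questions)):
        if scores[i][j] > threshold:
            print('Possible duplicates:', questions[i], '|', questions[j])
```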
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/quora-distilbert-multilingual | 2021-01-12T09:57:01.000Z | [
"pytorch",
"distilbert",
"arxiv:1908.10084",
"arxiv:2004.09813",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 1,527 | transformers | # Sentence Embeddings Models trained on Duplicate Questions
This model is from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers)-repository. It was trained on the [Quora Duplicate Questions dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs). Further details on SBERT can be found in the paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
For more details, see: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)
This model is the multilingual version of quora-distilbert-base, trained on parallel data for 50+ languages.
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
and for the multilingual models:
If you find this model helpful, feel free to cite our publication [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813):
```
@inproceedings{reimers-2020-multilingual-sentence-bert,
title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2004.09813",
}
``` |
|
sentence-transformers/roberta-base-nli-stsb-mean-tokens | 2021-05-20T20:33:57.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_roberta_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 3,944 | transformers | ||
sentence-transformers/roberta-large-nli-stsb-mean-tokens | 2021-05-20T20:36:27.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_roberta_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 522 | transformers | ||
sentence-transformers/stsb-bert-large | 2021-01-11T20:32:50.000Z | [
"pytorch",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"vocab.txt"
]
| sentence-transformers | 191 | transformers | # Sentence Embeddings Models trained on Paraphrases
This model is from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers)-repository. It was trained on SNLI + MultiNLI and on STS benchmark dataset. Further details on SBERT can be found in the paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/stsb-distilbert-base | 2021-01-11T20:37:25.000Z | [
"pytorch",
"distilbert",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 1,959 | transformers | # Sentence Embeddings Models trained on Paraphrases
This model is from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers)-repository. It was trained on SNLI + MultiNLI and on STS benchmark dataset. Further details on SBERT can be found in the paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/stsb-distilroberta-base-v2 | 2021-05-20T20:37:53.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 1,087 | transformers | ||
sentence-transformers/stsb-mpnet-base-v2 | 2021-04-30T21:55:34.000Z | [
"pytorch",
"mpnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sentence-transformers | 2,323 | transformers | ||
sentence-transformers/stsb-roberta-base-v2 | 2021-05-20T20:38:47.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 872 | transformers | ||
sentence-transformers/stsb-roberta-large | 2021-05-20T20:41:37.000Z | [
"pytorch",
"jax",
"roberta",
"arxiv:1908.10084",
"transformers"
]
| [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sentence-transformers | 1,391 | transformers | # Sentence Embeddings Models trained on Paraphrases
This model is from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers)-repository. It was trained on SNLI + MultiNLI and on STS benchmark dataset. Further details on SBERT can be found in the paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
|
sentence-transformers/stsb-xlm-r-multilingual | 2021-05-28T12:21:10.000Z | [
"pytorch",
"xlm-roberta",
"arxiv:1908.10084",
"arxiv:2004.09813",
"transformers",
"sentence_transformers",
"feature-extraction"
]
| feature-extraction | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
]
| sentence-transformers | 9,852 | transformers | ---
tags:
- sentence_transformers
- feature-extraction
---
# Sentence Embeddings Models trained on Paraphrases
This model is from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers)-repository. It was trained on SNLI + MultiNLI and on STS benchmark dataset. Further details on SBERT can be found in the paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
This model is the multilingual version; it was trained on parallel data for 50+ languages.
For more details, see: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813):
```
@inproceedings{reimers-2020-multilingual-sentence-bert,
title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2004.09813",
}
``` |
sentence-transformers/xlm-r-100langs-bert-base-nli-mean-tokens | 2020-08-28T17:53:45.000Z | [
"pytorch",
"xlm-roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
]
| sentence-transformers | 217 | transformers | ||
sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens | 2020-08-28T17:55:46.000Z | [
"pytorch",
"xlm-roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_bert_config.json",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
]
| sentence-transformers | 6,877 | transformers | ||
seonghwayun/test-model | 2021-04-28T04:04:31.000Z | []
| [
".gitattributes"
]
| seonghwayun | 0 | |||
serdarakyol/interpress-turkish-news-classification | 2021-05-20T05:29:35.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"tr",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| serdarakyol | 36 | transformers | ---
language: tr
datasets:
- interpress_news_category_tr
---
# INTERPRESS NEWS CLASSIFICATION
## Dataset
The dataset was downloaded from Interpress and consists of real-world news data. The full corpus contains 273K examples, of which 108K were kept after filtering and used to train this model. For more information about the dataset, please visit this [link](https://huggingface.co/datasets/interpress_news_category_tr_lite)
## Model
Model accuracy on the training and validation data is 97%.
## Usage for Torch
```sh
pip install transformers  # or pin the tested version: pip install transformers==4.3.3
```
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("serdarakyol/interpress-turkish-news-classification")
model = AutoModelForSequenceClassification.from_pretrained("serdarakyol/interpress-turkish-news-classification")
```
```python
import torch
import numpy as np
if torch.cuda.is_available():
device = torch.device("cuda")
model = model.cuda()
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('GPU name is:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
```
```python
def prediction(news):
news=[news]
indices=tokenizer.batch_encode_plus(
news,
max_length=512,
add_special_tokens=True,
return_attention_mask=True,
padding='max_length',
truncation=True,
return_tensors='pt')
inputs = indices["input_ids"].clone().detach().to(device)
masks = indices["attention_mask"].clone().detach().to(device)
with torch.no_grad():
output = model(inputs, token_type_ids=None,attention_mask=masks)
logits = output[0]
logits = logits.detach().cpu().numpy()
pred = np.argmax(logits,axis=1)[0]
return pred
```
```python
news = r"ABD'den Prens Selman'a yaptırım yok Beyaz Saray Sözcüsü Psaki, Muhammed bin Selman'a yaptırım uygulamamanın \"doğru karar\" olduğunu savundu. Psaki, \"Tarihimizde, Demokrat ve Cumhuriyetçi başkanların yönetimlerinde diplomatik ilişki içinde olduğumuz ülkelerin liderlerine yönelik yaptırım getirilmemiştir\" dedi."
```
You can find the news in this [link](https://www.ntv.com.tr/dunya/abdden-prens-selmana-yaptirim-yok,YTeWNv0-oU6Glbhnpjs1JQ) (news date: 02/03/2021)
```python
labels = {
0 : "Culture-Art",
1 : "Economy",
2 : "Politics",
3 : "Education",
4 : "World",
5 : "Sport",
6 : "Technology",
7 : "Magazine",
8 : "Health",
9 : "Agenda"
}
pred = prediction(news)
print(labels[pred])
# > World
```
## Usage for Tensorflow
```python
# pip install transformers  (or pin the tested version: pip install transformers==4.3.3)
# Note: this block reuses the `news` string and `labels` dict defined in the PyTorch section above.
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification
import numpy as np
tokenizer = BertTokenizer.from_pretrained('serdarakyol/interpress-turkish-news-classification')
model = TFBertForSequenceClassification.from_pretrained("serdarakyol/interpress-turkish-news-classification")
inputs = tokenizer(news, return_tensors="tf")
inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
outputs = model(inputs)
loss = outputs.loss
logits = outputs.logits
pred = np.argmax(logits,axis=1)[0]
labels[pred]
# > World
```
Thanks to [@yavuzkomecoglu](https://huggingface.co/yavuzkomecoglu) for contributing.
If you have any questions, please don't hesitate to contact me:
[](https://www.linkedin.com/in/serdarakyol55/)
[](https://github.com/serdarakyol) |
sergiyvl/ParaPhraserPlus_1epoch | 2021-05-20T05:30:51.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sergiyvl | 21 | transformers | |
sergiyvl/first_try_RuBERT_200_16_16_10ep | 2021-05-20T05:34:45.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sergiyvl | 16 | transformers | |
sergiyvl/first_try_RuBERT_200_16_16_25ep | 2021-05-20T05:36:09.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sergiyvl | 12 | transformers | |
sergiyvl/just_first_try_to_my_diplom_onBert | 2021-05-20T05:37:44.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sergiyvl | 11 | transformers | |
sergiyvl/just_first_try_to_my_diplom_onBert_10epoch | 2021-05-20T05:38:45.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sergiyvl | 8 | transformers | |
sergiyvl/just_first_try_to_my_diplom_onBert_minea_2epoch | 2021-05-20T05:39:53.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sergiyvl | 8 | transformers | |
sergiyvl/model_65000_20ep | 2021-05-20T05:41:04.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sergiyvl | 231 | transformers | |
sergunow/rick-sanchez-blenderbot-400-distill | 2021-06-17T22:33:55.000Z | [
"pytorch",
"blenderbot",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"readme.md",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| sergunow | 25 | transformers | |
setu4993/LaBSE | 2021-05-23T00:34:05.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"bo",
"bs",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"he",
"hi",
"hmn",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"or",
"pa",
"pl",
"pt",
"ro",
"ru",
"rw",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tk",
"tl",
"tr",
"tt",
"ug",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:CommonCrawl",
"dataset:Wikipedia",
"arxiv:2007.01852",
"transformers",
"sentence_embedding",
"multilingual",
"google",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| setu4993 | 2,121 | transformers | ---
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- bo
- bs
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- or
- pa
- pl
- pt
- ro
- ru
- rw
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
tags:
- bert
- sentence_embedding
- multilingual
- google
license: apache-2.0
datasets:
- CommonCrawl
- Wikipedia
---
# LaBSE
## Model description
Language-agnostic BERT Sentence Encoder (LaBSE) is a BERT-based model trained for sentence embedding for 109 languages. The pre-training process combines masked language modeling with translation language modeling. The model is useful for getting multilingual sentence embeddings and for bi-text retrieval.
- Model: [HuggingFace's model hub](https://huggingface.co/setu4993/LaBSE).
- Paper: [arXiv](https://arxiv.org/abs/2007.01852).
- Original model: [TensorFlow Hub](https://tfhub.dev/google/LaBSE/1).
- Blog post: [Google AI Blog](https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html).
## Usage
Using the model:
```python
import torch
from transformers import BertModel, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("setu4993/LaBSE")
model = BertModel.from_pretrained("setu4993/LaBSE")
model = model.eval()
english_sentences = [
"dog",
"Puppies are nice.",
"I enjoy taking long walks along the beach with my dog.",
]
english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
english_outputs = model(**english_inputs)
```
To get the sentence embeddings, use the pooler output:
```python
english_embeddings = english_outputs.pooler_output
```
Output for other languages:
```python
italian_sentences = [
"cane",
"I cuccioli sono carini.",
"Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.",
]
japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"]
italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True)
japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
italian_outputs = model(**italian_inputs)
japanese_outputs = model(**japanese_inputs)
italian_embeddings = italian_outputs.pooler_output
japanese_embeddings = japanese_outputs.pooler_output
```
For similarity between sentences, an L2-norm is recommended before calculating the similarity:
```python
import torch.nn.functional as F
def similarity(embeddings_1, embeddings_2):
normalized_embeddings_1 = F.normalize(embeddings_1, p=2)
normalized_embeddings_2 = F.normalize(embeddings_2, p=2)
return torch.matmul(
normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1)
)
print(similarity(english_embeddings, italian_embeddings))
print(similarity(english_embeddings, japanese_embeddings))
print(similarity(italian_embeddings, japanese_embeddings))
```
## Details
Details about data, training, evaluation and performance metrics are available in the [original paper](https://arxiv.org/abs/2007.01852).
### BibTeX entry and citation info
```bibtex
@misc{feng2020languageagnostic,
title={Language-agnostic BERT Sentence Embedding},
author={Fangxiaoyu Feng and Yinfei Yang and Daniel Cer and Naveen Arivazhagan and Wei Wang},
year={2020},
eprint={2007.01852},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
|
severinsimmler/bert-adapted-german-press | 2021-05-20T05:44:48.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| severinsimmler | 18 | transformers | ||
severinsimmler/german-press-bert | 2021-05-20T05:46:27.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| severinsimmler | 27 | transformers | |
severinsimmler/literary-german-bert | 2021-05-20T05:47:20.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"de",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"kfold.png",
"prosa-jahre.png",
"pytorch_model.bin",
"special_tokens_map.json",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| severinsimmler | 131 | transformers | ---
language: de
thumbnail: https://huggingface.co/severinsimmler/literary-german-bert/raw/main/kfold.png
---
# German BERT for literary texts
This German BERT is based on `bert-base-german-dbmdz-cased`, and has been adapted to the domain of literary texts by fine-tuning the language modeling task on the [Corpus of German-Language Fiction](https://figshare.com/articles/Corpus_of_German-Language_Fiction_txt_/4524680/1). Afterwards the model was fine-tuned for named entity recognition on the [DROC](https://gitlab2.informatik.uni-wuerzburg.de/kallimachos/DROC-Release) corpus, so you can use it to recognize protagonists in German novels.
# Stats
## Language modeling
The [Corpus of German-Language Fiction](https://figshare.com/articles/Corpus_of_German-Language_Fiction_txt_/4524680/1) consists of 3,194 documents with 203,516,988 tokens or 1,520,855 types. The publication year of the texts ranges from the 18th to the 20th century:

### Results
After one epoch:
| Model | Perplexity |
| ---------------- | ---------- |
| Vanilla BERT | 6.82 |
| Fine-tuned BERT | 4.98 |
## Named entity recognition
The provided model was also fine-tuned for two epochs on 10,799 training sentences, validated on 547 sentences and tested on 1,845, with three labels: `B-PER`, `I-PER` and `O`.
## Results
| Dataset | Precision | Recall | F1 |
| ------- | --------- | ------ | ---- |
| Dev | 96.4 | 87.3 | 91.6 |
| Test | 92.8 | 94.9 | 93.8 |
The model has also been evaluated using 10-fold cross validation and compared with a classic Conditional Random Field baseline described in [Jannidis et al.](https://opus.bibliothek.uni-wuerzburg.de/opus4-wuerzburg/frontdoor/deliver/index/docId/14333/file/Jannidis_Figurenerkennung_Roman.pdf) (2015):

# References
Markus Krug, Lukas Weimer, Isabella Reger, Luisa Macharowsky, Stephan Feldhaus, Frank Puppe, Fotis Jannidis, [Description of a Corpus of Character References in German Novels](http://webdoc.sub.gwdg.de/pub/mon/dariah-de/dwp-2018-27.pdf), 2018.
Fotis Jannidis, Isabella Reger, Lukas Weimer, Markus Krug, Martin Toepfer, Frank Puppe, [Automatische Erkennung von Figuren in deutschsprachigen Romanen](https://opus.bibliothek.uni-wuerzburg.de/opus4-wuerzburg/frontdoor/deliver/index/docId/14333/file/Jannidis_Figurenerkennung_Roman.pdf), 2015.
|
severo/autonlp-sentiment_detection-1781580 | 2021-06-18T18:20:55.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:severo/autonlp-data-sentiment_detection-3c8bcd36",
"transformers",
"autonlp"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sample_input.pkl",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| severo | 5 | transformers | |
seyonec/BPE_SELFIES_PubChem_shard00_120k | 2021-05-20T20:44:11.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 21 | transformers | |
seyonec/BPE_SELFIES_PubChem_shard00_150k | 2021-05-20T20:44:59.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 15 | transformers | |
seyonec/BPE_SELFIES_PubChem_shard00_160k | 2021-05-20T20:46:05.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 20 | transformers | |
seyonec/BPE_SELFIES_PubChem_shard00_166_5k | 2021-05-20T20:46:50.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 15 | transformers | |
seyonec/BPE_SELFIES_PubChem_shard00_50k | 2021-05-20T20:48:07.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 9 | transformers | |
seyonec/BPE_SELFIES_PubChem_shard00_70k | 2021-05-20T20:49:05.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 15 | transformers | |
seyonec/ChemBERTA_PubChem1M_shard00 | 2021-05-20T20:50:55.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 16 | transformers | |
seyonec/ChemBERTA_PubChem1M_shard00_115k | 2021-05-20T20:51:44.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 21 | transformers | |
seyonec/ChemBERTA_PubChem1M_shard00_125k | 2021-05-20T20:52:31.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 14 | transformers | |
seyonec/ChemBERTA_PubChem1M_shard00_140k | 2021-05-20T20:53:19.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 28 | transformers | |
seyonec/ChemBERTA_PubChem1M_shard00_155k | 2021-05-20T20:54:07.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 79 | transformers | |
seyonec/ChemBERTA_PubChem1M_shard00_75k | 2021-05-20T20:54:57.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 10 | transformers | |
seyonec/ChemBERTa-zinc-base-v1 | 2021-05-20T20:55:33.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"chemistry",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 1,006 | transformers | ---
tags:
- chemistry
---
# ChemBERTa: Training a BERT-like transformer model for masked language modelling of chemical SMILES strings.
Deep learning for chemistry and materials science remains a novel field with lots of potential. However, the transfer-learning-based methods that are popular in areas such as NLP and computer vision have not yet been widely developed for computational chemistry and machine learning. Using HuggingFace's suite of models and the ByteLevel tokenizer, we are able to train on a large corpus of 100k SMILES strings from a commonly known benchmark dataset, ZINC.
Trained over 5 epochs, the RoBERTa model achieves a decent loss of 0.398, which would likely continue to decline if trained for a larger number of epochs. The model can predict masked tokens within a SMILES sequence/molecule, allowing variants of a molecule within discoverable chemical space to be predicted.
By applying the representations of functional groups and atoms learned by the model, we can try to tackle problems of toxicity, solubility, drug-likeness, and synthesis accessibility on smaller datasets using the learned representations as features for graph convolution and attention models on the graph structure of molecules, as well as fine-tuning of BERT. Finally, we propose the use of attention visualization as a helpful tool for chemistry practitioners and students to quickly identify important substructures in various chemical properties.
Additionally, previous research has found visualization of the attention mechanism to be incredibly valuable for chemical reaction classification. Open-sourcing large-scale transformer models such as RoBERTa with HuggingFace may help accelerate these individual research directions.
A link to a repository which includes the training, uploading and evaluation notebook (with sample predictions on compounds such as Remdesivir) can be found [here](https://github.com/seyonechithrananda/bert-loves-chemistry). All of the notebooks can be copied into a new Colab runtime for easy execution.
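For a quick taste outside of the notebooks, a minimal sketch of masked-token prediction with this checkpoint might look like the following (the SMILES string and its masked position are purely illustrative):

```python
from transformers import pipeline

# Minimal sketch, not taken from the original notebooks: fill in a masked
# token of a SMILES string with ChemBERTa. The molecule is illustrative only.
fill_mask = pipeline(
    "fill-mask",
    model="seyonec/ChemBERTa-zinc-base-v1",
    tokenizer="seyonec/ChemBERTa-zinc-base-v1",
)

masked_smiles = "C1=CC=CC<mask>C1"  # ring-like SMILES with one token masked
for prediction in fill_mask(masked_smiles):
    print(prediction["sequence"], round(prediction["score"], 4))
```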
Thanks for checking this out!
- Seyone
|
seyonec/ChemBERTa-zinc250k-v1 | 2021-05-20T20:56:13.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 31 | transformers | |
seyonec/ChemBERTa_masked_30_PubChem_shard00_1M_150k_steps | 2021-05-20T20:56:58.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 28 | transformers | |
seyonec/ChemBERTa_zinc250k_v2_40k | 2021-05-20T20:57:42.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 96 | transformers | |
seyonec/PubChem10M_SMILES_BPE_120k | 2021-05-20T20:58:35.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 9 | transformers | |
seyonec/PubChem10M_SMILES_BPE_180k | 2021-05-20T20:59:23.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 1,460 | transformers | |
seyonec/PubChem10M_SMILES_BPE_240k | 2021-05-20T21:00:08.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 18 | transformers | |
seyonec/PubChem10M_SMILES_BPE_390k | 2021-05-20T21:00:52.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 19 | transformers | |
seyonec/PubChem10M_SMILES_BPE_396_250 | 2021-05-20T21:01:53.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 1,583 | transformers | |
seyonec/PubChem10M_SMILES_BPE_450k | 2021-05-20T21:02:39.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 3,837 | transformers | |
seyonec/PubChem10M_SMILES_BPE_50k | 2021-05-20T21:03:24.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 15 | transformers | |
seyonec/PubChem10M_SMILES_BPE_60k | 2021-05-20T21:04:12.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| seyonec | 14 | transformers | |
seyonec/SMILES_BPE_PubChem_100k_shard00 | 2021-05-20T21:05:05.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| seyonec | 13 | transformers | |
seyonec/SMILES_BPE_PubChem_250k | 2021-05-20T21:06:00.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| seyonec | 21 | transformers | |
seyonec/SMILES_tokenized_PubChem_shard00_100k | 2021-05-20T21:06:51.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| seyonec | 14 | transformers | |
seyonec/SMILES_tokenized_PubChem_shard00_150k | 2021-05-20T21:07:44.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| seyonec | 21 | transformers | |
seyonec/SMILES_tokenized_PubChem_shard00_160k | 2021-05-20T21:08:23.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json",
"vocab.txt"
]
| seyonec | 672 | transformers | |
seyonec/SMILES_tokenized_PubChem_shard00_40k | 2021-05-20T21:09:40.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| seyonec | 15 | transformers | |
seyonec/SMILES_tokenized_PubChem_shard00_50k | 2021-05-20T21:10:29.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| seyonec | 463 | transformers | |
seyonec/SmilesTokenizer_ChemBERTa_zinc250k_40k | 2021-05-20T21:11:20.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| seyonec | 31 | transformers | |
seyonec/checkpoint-50000 | 2021-05-20T21:12:19.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"training_args.bin",
"vocab.json"
]
| seyonec | 30 | transformers | |
sgich/bert_case_uncased_KenyaHateSpeech | 2021-04-25T19:12:40.000Z | [
"pytorch",
"text-classification",
"pipeline_tag:text-classification"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json.txt",
"vocab.txt"
]
| sgich | 0 | ---
pipeline_tag: text-classification
---
# HateSpeechDetection
The model classifies a text as either Hate Speech or Normal Speech. It was trained on data from Twitter, specifically Kenya-related tweets. To make the most of the limited dataset, text augmentation was applied.
The dataset is available here: https://github.com/sgich/HateSpeechDetection
The model starts from a pre-trained "bert-base-uncased" transformer, adds a dropout layer and a linear output layer, and extends the vocabulary with 10 common emojis that may be related to either hate or normal speech. It was then fine-tuned on a dataset of scraped Kenyan/Kenya-related tweets to classify text as "Normal Speech" or "Hate Speech". The model was motivated by the realization that the majority of similar models do not cater for the African context, where target groups are defined not by race and/or religious affiliation but mostly by tribal differences, which have proved fatal in the past.
The model could be improved greatly with a larger, more representative dataset and further optimization.
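A minimal usage sketch (assuming the checkpoint loads with the standard `transformers` Auto classes; the label-index mapping below is an assumption and should be checked against the model config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch only: the label order (e.g. 0 = normal speech, 1 = hate speech)
# is an assumption -- confirm it against the checkpoint's config.json.
model_id = "sgich/bert_case_uncased_KenyaHateSpeech"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Example tweet text to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```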
|
|
sgk/test_model | 2021-04-21T05:56:32.000Z | []
| [
".gitattributes"
]
| sgk | 0 | |||
sgugger/dummy-model | 2021-06-09T00:04:27.000Z | [
"tf",
"camembert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json"
]
| sgugger | 1 | transformers | |
sgugger/finetuned-bert | 2021-06-15T14:40:13.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| sgugger | 22 | transformers | ---
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8651960784313726
- name: F1
type: f1
value: 0.9050086355785838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3898
- Accuracy: 0.8652
- F1: 0.9050
## Model description
More information needed
## Intended uses & limitations
More information needed
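As a hedged illustration of the intended use (MRPC-style paraphrase detection), the checkpoint could be queried as below; the sentence pair is invented and the label order follows the GLUE/MRPC convention, which is an assumption here:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch of MRPC-style paraphrase classification. The label order
# (0 = not equivalent, 1 = equivalent) follows the GLUE/MRPC convention
# and should be confirmed against the model's config.
model_id = "sgugger/finetuned-bert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Profits hit an all-time high for the company this quarter.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```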
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5549 | 1.0 | 230 | 0.3806 | 0.8333 | 0.8870 |
| 0.3343 | 2.0 | 460 | 0.3620 | 0.8382 | 0.8896 |
| 0.181 | 3.0 | 690 | 0.3898 | 0.8652 | 0.9050 |
### Framework versions
- Transformers 4.7.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.8.0.dev0
- Tokenizers 0.10.1
|
sgugger/funnel-random-tiny | 2021-04-08T19:31:32.000Z | [
"pytorch",
"tf",
"funnel",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5"
]
| sgugger | 7,899 | transformers | ||
sgugger/resnet50d | 2021-02-18T10:31:48.000Z | [
"pytorch",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"arxiv:1906.02659",
"arxiv:2010.15052",
"image-classification",
"timm",
"resnet",
"license:apache-2.0"
]
| image-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
]
| sgugger | 10 | timm | ---
tags:
- image-classification
- timm
- resnet
license: apache-2.0
datasets:
- imagenet
---
# ResNet-50d
Pretrained model on [ImageNet](http://www.image-net.org/). The ResNet architecture was introduced in
[this paper](https://arxiv.org/abs/1512.03385) and is adapted with the ResNet-D trick from
[this paper](https://arxiv.org/abs/1812.01187)
## Model description
ResNets are deep convolutional neural networks using residual connections. Each residual block is composed of two convolutions
with a ReLU in the middle, and its output is the sum of the block's input and the output of the convolutions.

This way, there is a direct connection from the original inputs to even the deepest layers in the network.
## Intended uses & limitations
You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head
to fine-tune it on a downstream task (another classification task with different labels, image segmentation or
object detection, to name a few).
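For example, a new classification head can be attached at creation time in `timm` (a sketch assuming a hypothetical 10-class downstream task; the model name is passed as in the usage example below):

```python
import timm

# Sketch: recreate the backbone with a fresh 10-class head for fine-tuning.
# num_classes is a placeholder for whatever downstream task you have.
model = timm.create_model("sgugger/resnet50d", num_classes=10)
```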
### How to use
You can use this model with the usual factory method in `timm`:
```python
import PIL
import timm
import torch
model = timm.create_model("sgugger/resnet50d")
img = PIL.Image.open(path_to_an_image)
img = img.convert("RGB")
config = model.default_cfg
if isinstance(config["input_size"], tuple):
img_size = config["input_size"][-2:]
else:
img_size = config["input_size"]
transform = timm.data.transforms_factory.transforms_imagenet_eval(
img_size=img_size,
interpolation=config["interpolation"],
mean=config["mean"],
std=config["std"],
)
input_tensor = transform(img)
input_tensor = input_tensor.unsqueeze(0)
# ^ batch size = 1
with torch.no_grad():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
### Limitations and bias
The training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will
probably not generalize well on drawings or images containing multiple objects with different labels.
The training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or
models created by fine-tuning this model will work better on images picturing scenes from these countries (see
[this paper](https://arxiv.org/abs/1906.02659) for examples).
More generally, [recent research](https://arxiv.org/abs/2010.15052) has shown that even models trained in an
unsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in
the training images.
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million hand-annotated images across 1,000 categories.
## Training procedure
To be completed
### Preprocessing
The images are resized using bicubic interpolation to 224x224 and normalized with the usual ImageNet statistics.
## Evaluation results
This model has a top-1 accuracy of 80.53% and a top-5 accuracy of 95.16% on the ImageNet evaluation set.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
archivePrefix = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
sh/test | 2021-04-26T08:17:59.000Z | []
| [
".gitattributes"
]
| sh | 0 | |||
shahrukhx01/bert-mini-finetune-question-detection | 2021-06-02T05:55:27.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| shahrukhx01 | 409 | transformers | # KEYWORD QUERY VS STATEMENT/QUESTION CLASSIFIER FOR NEURAL SEARCH
| Train Loss | Validation Acc.| Test Acc.|
| ------------- |:-------------: | -----: |
| 0.000806 | 0.99 | 0.997 |
# USAGE
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/bert-mini-finetune-question-detection")
model = AutoModelForSequenceClassification.from_pretrained("shahrukhx01/bert-mini-finetune-question-detection")
```
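For example, a minimal inference sketch looks like this (the meaning of the predicted class ids is an assumption — check `model.config.id2label` for the actual mapping):
```python
import torch

queries = ["bert text classification tutorial", "How do I fine-tune BERT for text classification?"]
inputs = tokenizer(queries, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = logits.argmax(dim=-1).tolist()
# Assumed mapping: one class is "keyword query", the other "question/statement";
# verify against model.config.id2label before relying on it.
for query, class_id in zip(queries, predicted_class_ids):
    print(query, "->", class_id)
```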
Trained to add a query-classification feature to [Haystack](https://github.com/deepset-ai/haystack/issues/611) that distinguishes keyword queries from questions/statements.
Problem Statement:
One common challenge we saw in deployments: we need to distinguish between real questions and keyword queries that come in. We only want to route questions to the Reader branch in order to maximize the accuracy of results and minimize computation effort/cost.
Baseline:
https://www.kaggle.com/shahrukhkhan/question-v-statement-detection
Dataset:
https://www.kaggle.com/stefanondisponibile/quora-question-keyword-pairs
Kaggle Notebook:
https://www.kaggle.com/shahrukhkhan/question-vs-statement-classification-mini-bert/
|
shahrukhx01/question-vs-statement-classifier | 2021-06-02T05:55:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| shahrukhx01 | 22 | transformers | |
shahukareem/wav2vec2-large-xlsr-53-dhivehi | 2021-03-28T08:47:31.000Z | [
"pytorch",
"wav2vec2",
"dv",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| shahukareem | 1,061 | transformers | ---
language: dv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Shahu Kareem XLSR Wav2Vec2 Large 53 Dhivehi
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice dv
      type: common_voice
      args: dv
    metrics:
    - name: Test WER
      type: wer
      value: 32.85
---
# Wav2Vec2-Large-XLSR-53-Dhivehi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dhivehi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "dv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
model = Wav2Vec2ForCTC.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays and resample them to 16kHz.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Dhivehi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "dv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
model = Wav2Vec2ForCTC.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\،\.\؟\!\'\"\–\’]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays and normalize the transcriptions.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference in batches and collect the predicted transcriptions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 32.85%
## Training
The Common Voice `train` and `validation` datasets were used for training.
## Example predictions
```
--
reference: ކަރަންޓް ވައިރުކޮށް ބޮކި ހަރުކުރުން
predicted: ކަރަންޓް ވައިރުކޮށް ބޮކި ހަރުކުރުން
--
reference: ދެން އެކުދިންނާ ދިމާއަށް އަތް ދިށްކޮށްލެވެ
predicted: ދެން އެކުދިންނާ ދިމާއަށް އަތް ދިއްކޮށްލެވެ ް
--
reference: ރަކި ހިނިތުންވުމަކާއެކު އޭނާ އަމިއްލައަށް ތައާރަފްވި
predicted: ރަކި ހިނިތުންވުމަކާއެކު އޭނާ އަމިއްލައަށް ތައަރަފްވި
--
reference: ކޮޓަރީގެ ކުޑަދޮރުން ބޭރު ބަލަހައްޓައިގެން އިން ރޫނާގެ މޫނުމަތިން ފާޅުވަމުން ދިޔައީ ކަންބޮޑުވުމުގެ އަސަރުތައް
predicted: ކޮޓަރީގެ ކުޑަދޮރުން ބޭރު ބަލަހައްޓައިގެން އިން ރނާގެ މޫނުމަތިން ފާޅުވަމުން ދިޔައީ ކަންބޮޑުވުމުގެ އަސަރުތައް
--
``` |
sharad/transpin | 2021-03-31T19:33:20.000Z | []
| [
".gitattributes"
]
| sharad | 0 |