modelId (string, 4-112 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 21 classes) | files (list) | publishedBy (string, 2-37 chars) | downloads_last_month (int32, 0-9.44M) | library (string, 15 classes) | modelCard (string, 0-100k chars)
---|---|---|---|---|---|---|---|---|
narabzad/passage_reranker_large_bert | 2020-08-16T23:35:58.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| narabzad | 12 | transformers | ||
narabzad/saved | 2021-05-20T01:15:05.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| narabzad | 11 | transformers | |
narabzad/upload | 2020-08-15T16:56:43.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"bert_config.json",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| narabzad | 9 | transformers | ||
nasser/dyna | 2021-06-17T02:44:50.000Z | []
| [
".gitattributes",
"README.md"
]
| nasser | 0 | |||
nateraw/bert-base-uncased-ag-news | 2021-05-20T01:16:44.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:ag_news",
"transformers",
"ag_news",
"license:mit"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt",
".ipynb_checkpoints/README-checkpoint.md"
]
| nateraw | 66 | transformers | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- ag_news
- pytorch
license: mit
datasets:
- ag_news
metrics:
- accuracy
---
# bert-base-uncased-ag-news
## Model description
`bert-base-uncased` finetuned on the AG News dataset using PyTorch Lightning. Sequence length 128, learning rate 2e-5, batch size 32, 4 T4 GPUs, 4 epochs. [The code can be found here](https://github.com/nateraw/hf-text-classification)
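A minimal inference sketch (not part of the original card; it assumes the standard `transformers` text-classification pipeline, and the input headline is illustrative):
```python
from transformers import pipeline

# Hypothetical usage sketch; the example headline is illustrative
classifier = pipeline("text-classification", model="nateraw/bert-base-uncased-ag-news")
print(classifier("Stocks rallied after the central bank held interest rates steady."))
```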
#### Limitations and bias
- Not the best model...
## Training data
Data came from HuggingFace's `datasets` package. The data can be viewed [on nlp viewer](https://huggingface.co/nlp/viewer/?dataset=ag_news).
## Training procedure
...
## Eval results
... |
nateraw/bert-base-uncased-emotion | 2021-05-20T01:18:38.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:emotion",
"transformers",
"emotion",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt",
".ipynb_checkpoints/README-checkpoint.md"
]
| nateraw | 1,667 | transformers | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- accuracy
---
# bert-base-uncased-emotion
## Model description
`bert-base-uncased` finetuned on the emotion dataset using PyTorch Lightning. Sequence length 128, learning rate 2e-5, batch size 32, 2 GPUs, 4 epochs.
For more details, please see [the emotion dataset on nlp viewer](https://huggingface.co/nlp/viewer/?dataset=emotion).
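A minimal inference sketch (not the author's training code, which is noted below as unavailable; it assumes the standard `transformers` text-classification pipeline):
```python
from transformers import pipeline

# Hypothetical usage sketch; the example sentence is illustrative
classifier = pipeline("text-classification", model="nateraw/bert-base-uncased-emotion")
print(classifier("I can't believe how happy this makes me!"))
```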
#### Limitations and bias
- Not the best model, but it works in a pinch I guess...
- Code not available as I just hacked this together.
- [Follow me on github](https://github.com/nateraw) to get notified when code is made available.
## Training data
Data came from HuggingFace's `datasets` package. The data can be viewed [on nlp viewer](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
...
## Eval results
val_acc - 0.931 (useless, as this should be precision/recall/f1)
The score was calculated using PyTorch Lightning metrics.
|
nateraw/bert-base-uncased-imdb | 2021-05-20T01:19:33.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| nateraw | 171 | transformers | |
nateraw/pizza-pasta | 2021-06-16T00:14:08.000Z | [
"pytorch",
"vit",
"transformers",
"image-classification"
]
| image-classification | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| nateraw | 63 | transformers | ---
tags:
- image-classification
- pytorch
---
# pizza-pasta
Autogenerated by a super cool Colab notebook 😎 |
nateraw/resnet101 | 2021-04-13T09:54:57.000Z | [
"pytorch",
"resnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| nateraw | 8 | transformers | ||
nateraw/resnet152 | 2021-04-13T10:00:38.000Z | [
"pytorch",
"resnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| nateraw | 6 | transformers | ||
nateraw/resnet18 | 2021-04-13T10:06:56.000Z | [
"pytorch",
"resnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| nateraw | 10 | transformers | ||
nateraw/resnet34 | 2021-04-13T10:09:31.000Z | [
"pytorch",
"resnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| nateraw | 9 | transformers | ||
nateraw/resnet50 | 2021-04-15T23:19:34.000Z | [
"pytorch",
"resnet",
"dataset:imagenet",
"transformers",
"image-classification"
]
| image-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
]
| nateraw | 121 | transformers | ---
tags:
- image-classification
- pytorch
datasets:
- imagenet
---
# Resnet50 Model from Torchvision
## Using the model
```
pip install modelz
```
```python
import torch  # needed for the example input below
from modelz import ResnetModel
model = ResnetModel.from_pretrained('nateraw/resnet50')
ex_input = torch.rand(4, 3, 224, 224)
out = model(ex_input)
``` |
nateraw/resnext101_32x8d | 2021-04-13T10:12:21.000Z | [
"pytorch",
"resnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| nateraw | 6 | transformers | ||
nateraw/resnext50_32x4d | 2021-04-13T10:21:23.000Z | [
"pytorch",
"resnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| nateraw | 8 | transformers | ||
nateraw/vit-age-classifier | 2021-05-24T03:09:01.000Z | [
"pytorch",
"vit",
"dataset:fairface",
"transformers",
"image-classification"
]
| image-classification | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| nateraw | 306 | transformers | ---
tags:
- image-classification
- pytorch
datasets:
- fairface
---
# ViT For Age Classification
A vision transformer finetuned to classify the age of a given person's face.
## Usage in Transformers
```python
import requests
from PIL import Image
from io import BytesIO
from transformers import ViTFeatureExtractor, ViTForImageClassification
# Get example image from official fairface repo + read it in as an image
r = requests.get('https://github.com/dchen236/FairFace/blob/master/detected_faces/race_Asian_face0.jpg?raw=true')
im = Image.open(BytesIO(r.content))
# Init model, transforms
model = ViTForImageClassification.from_pretrained('nateraw/vit-age-classifier')
transforms = ViTFeatureExtractor.from_pretrained('nateraw/vit-age-classifier')
# Transform our image and pass it through the model
inputs = transforms(im, return_tensors='pt')
output = model(**inputs)
# Predicted Class probabilities
proba = output.logits.softmax(1)
# Predicted Classes
preds = proba.argmax(1)
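# Map predicted class ids to human-readable age labels (assumes the id2label mapping stored in the model config)
labels = [model.config.id2label[int(i)] for i in preds]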
``` |
nateraw/vit-base-patch16-224-cifar10 | 2021-04-04T21:28:35.000Z | [
"pytorch",
"vit",
"dataset:cifar10",
"transformers",
"image-classification",
"license:apache-2.0"
]
| image-classification | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| nateraw | 75 | transformers | ---
tags:
- image-classification
- pytorch
license: apache-2.0
datasets:
- cifar10
metrics:
- accuracy
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
---
# Vision Transformer Fine Tuned on CIFAR10
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) and **fine-tuned on CIFAR10** at resolution 224x224.
Check out the code at [my GitHub repo](https://github.com/nateraw/huggingface-vit-finetune).
## Usage
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog10.png'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('nateraw/vit-base-patch16-224-cifar10')
model = ViTForImageClassification.from_pretrained('nateraw/vit-base-patch16-224-cifar10')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
preds = outputs.logits.argmax(dim=1)
classes = [
'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'
]
classes[preds[0]]
```
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
|
nateraw/vit-race-classifier | 2021-05-24T15:42:03.000Z | [
"pytorch",
"vit",
"dataset:fairface",
"transformers",
"image-classification"
]
| image-classification | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| nateraw | 98 | transformers | ---
tags:
- image-classification
- pytorch
datasets:
- fairface
---
# ViT For Race Classification
A vision transformer finetuned to classify the race of a given person's face.
## Usage in Transformers
```python
import requests
from PIL import Image
from io import BytesIO
from transformers import ViTFeatureExtractor, ViTForImageClassification
# Get example image from official fairface repo + read it in as an image
url = 'https://github.com/dchen236/FairFace/blob/master/detected_faces/race_Asian_face0.jpg?raw=true'
r = requests.get(url)
im = Image.open(BytesIO(r.content))
# Init model, transforms
model = ViTForImageClassification.from_pretrained('nateraw/vit-race-classifier')
transforms = ViTFeatureExtractor.from_pretrained('nateraw/vit-race-classifier')
# Transform our image and pass it through the model
inputs = transforms(im, return_tensors='pt')
output = model(**inputs)
# Predicted Class probabilities
proba = output.logits.softmax(1)
# Predicted Classes
preds = proba.argmax(1)
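# Map predicted class ids to human-readable race labels (assumes the id2label mapping stored in the model config)
labels = [model.config.id2label[int(i)] for i in preds]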
``` |
nateraw/wide_resnet101_2 | 2021-04-13T10:26:39.000Z | [
"pytorch",
"resnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| nateraw | 7 | transformers | ||
nateraw/wide_resnet50_2 | 2021-04-13T10:42:01.000Z | [
"pytorch",
"resnet",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| nateraw | 7 | transformers | ||
navid-rekabsaz/advbert_ranker_l2 | 2021-06-04T17:00:02.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
]
| navid-rekabsaz | 25 | transformers | ## Welcome |
|
navid-rekabsaz/advbert_ranker_l4 | 2021-06-04T17:01:05.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
]
| navid-rekabsaz | 18 | transformers | ## Welcome |
|
navjordj/norberta | 2021-06-17T10:08:19.000Z | [
"roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"merges.txt",
"vocab.json"
]
| navjordj | 2 | transformers | ||
navsynergy/test | 2021-04-13T07:23:16.000Z | []
| [
".gitattributes"
]
| navsynergy | 0 | |||
navteca/bart-large-mnli | 2021-04-16T15:39:56.000Z | []
| [
".gitattributes"
]
| navteca | 0 | |||
navteca/electra-base-squad2 | 2021-04-06T16:16:43.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| navteca | 29 | transformers | # Electra base model for QA (SQuAD 2.0)
This model uses [electra-base](https://huggingface.co/google/electra-base-discriminator).
## Usage and Performance
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model & tokenizer
electra_model = AutoModelForQuestionAnswering.from_pretrained('navteca/electra-base-squad2')
electra_tokenizer = AutoTokenizer.from_pretrained('navteca/electra-base-squad2')
# Get predictions
nlp = pipeline('question-answering', model=electra_model, tokenizer=electra_tokenizer)
result = nlp({
'question': 'How many people live in Berlin?',
'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})
print(result)
```
|
navteca/ms-marco-electra-base | 2021-04-06T16:23:39.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"CEBinaryClassificationEvaluator_MS-Marco_results.csv",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| navteca | 9 | transformers | # Cross-Encoder for MS Marco
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [electra-base](https://huggingface.co/google/electra-base-discriminator).
## Training Data
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) dataset. The model predicts a score between 0 and 1: given a question and a paragraph, can the question be answered by the paragraph?
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
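# Note: 'model_name' below is a placeholder; for this card it would be 'navteca/ms-marco-electra-base'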
model = CrossEncoder('model_name')
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')])
print(scores)
```
|
navteca/qnli-electra-base | 2021-04-06T16:24:59.000Z | [
"pytorch",
"electra",
"text-classification",
"arxiv:1804.07461",
"transformers"
]
| text-classification | [
".gitattributes",
"CEBinaryAccuracyEvaluator_qnli-dev_results.csv",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| navteca | 27 | transformers | # Cross-Encoder for QNLI
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [electra-base](https://huggingface.co/google/electra-base-discriminator).
## Training Data
Given a question and paragraph, can the question be answered by the paragraph? The models have been trained on the [GLUE QNLI](https://arxiv.org/abs/1804.07461) dataset, which transformed the [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/) into an NLI task.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
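# Note: 'model_name' below is a placeholder; for this card it would be 'navteca/qnli-electra-base'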
model = CrossEncoder('model_name')
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')])
print(scores)
```
|
navteca/quora-roberta-base | 2021-05-20T18:39:12.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"CEBinaryClassificationEvaluator_Quora-dev_results.csv",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| navteca | 10 | transformers | # Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [roberta-base](https://huggingface.co/roberta-base).
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely the two given questions are to be duplicates.
Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will receive a rather low score, as they are not duplicates.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
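# Note: 'model_name' below is a placeholder; for this card it would be 'navteca/quora-roberta-base'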
model = CrossEncoder('model_name')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
|
navteca/quora-roberta-large | 2021-05-20T18:41:34.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"CEBinaryClassificationEvaluator_Quora-dev_results.csv",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| navteca | 35 | transformers | # Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [roberta-large](https://huggingface.co/roberta-large).
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely the two given questions are to be duplicates.
Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will receive a rather low score, as they are not duplicates.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
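# Note: 'model_name' below is a placeholder; for this card it would be 'navteca/quora-roberta-large'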
model = CrossEncoder('model_name')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
|
navteca/roberta-base-squad2 | 2021-05-20T18:43:09.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| navteca | 483 | transformers | # Roberta base model for QA (SQuAD 2.0)
This model uses [roberta-base](https://huggingface.co/roberta-base).
## Usage and Performance
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model & tokenizer
roberta_model = AutoModelForQuestionAnswering.from_pretrained('navteca/roberta-base-squad2')
roberta_tokenizer = AutoTokenizer.from_pretrained('navteca/roberta-base-squad2')
# Get predictions
nlp = pipeline('question-answering', model=roberta_model, tokenizer=roberta_tokenizer)
result = nlp({
'question': 'How many people live in Berlin?',
'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})
print(result)
```
|
navteca/roberta-large-squad2 | 2021-05-20T18:45:18.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| navteca | 48 | transformers | # Roberta large model for QA (SQuAD 2.0)
This model uses [roberta-large](https://huggingface.co/roberta-large).
## Usage and Performance
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model & tokenizer
roberta_model = AutoModelForQuestionAnswering.from_pretrained('navteca/roberta-large-squad2')
roberta_tokenizer = AutoTokenizer.from_pretrained('navteca/roberta-large-squad2')
# Get predictions
nlp = pipeline('question-answering', model=roberta_model, tokenizer=roberta_tokenizer)
result = nlp({
'question': 'How many people live in Berlin?',
'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})
print(result)
```
|
nazmiasri/property-description-gpt2 | 2021-05-23T10:45:19.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results_lm.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| nazmiasri | 62 | transformers | |
nboost/pt-bert-base-uncased-msmarco | 2021-05-20T01:23:41.000Z | [
"pytorch",
"jax",
"onnx",
"bert",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"bert-base-msmarco.onnx",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| nboost | 2,588 | transformers | ||
nboost/pt-bert-large-msmarco | 2021-05-20T01:25:29.000Z | [
"pytorch",
"jax",
"onnx",
"bert",
"transformers"
]
| [
".gitattributes",
"bert-large-msmarco.onnx",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| nboost | 1,085 | transformers | ||
nboost/pt-biobert-base-msmarco | 2021-05-20T01:27:20.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| nboost | 26 | transformers | ||
nboost/pt-tinybert-msmarco | 2021-05-20T01:28:00.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
]
| nboost | 9,062 | transformers | ||
nbouali/flaubert-base-uncased-finetuned-cooking | 2021-04-28T16:02:59.000Z | [
"pytorch",
"flaubert",
"text-classification",
"fr",
"transformers",
"french",
"flaubert-base-uncased"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| nbouali | 37 | transformers | ---
language: fr
tags:
- text-classification
- flaubert
- french
- flaubert-base-uncased
widget:
- text: "Lasagnes à la bolognaise"
---
# FlauBERT finetuned on French cooking recipes
This model is finetuned on a sequence classification task that associates each sequence with the appropriate recipe category.
### How to use it?
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TextClassificationPipeline
loaded_tokenizer = AutoTokenizer.from_pretrained("nbouali/flaubert-base-uncased-finetuned-cooking")
loaded_model = AutoModelForSequenceClassification.from_pretrained("nbouali/flaubert-base-uncased-finetuned-cooking")
nlp = TextClassificationPipeline(model=loaded_model,tokenizer=loaded_tokenizer,task="Recipe classification")
print(nlp("Lasagnes à la bolognaise"))
```
```
[{'label': 'LABEL_6', 'score': 0.9921900033950806}]
```
### Label encoding:
| label | Recipe Category |
|:------:|:--------------:|
| 0 |'Accompagnement' |
| 1 | 'Amuse-gueule' |
| 2 | 'Boisson' |
| 3 | 'Confiserie' |
| 4 | 'Dessert'|
| 5 | 'Entrée' |
| 6 |'Plat principal' |
| 7 | 'Sauce' |
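A small helper sketch (not part of the original card; the mapping simply transcribes the table above) for turning the pipeline's `LABEL_<n>` output into a recipe category:
```python
# Mapping transcribed from the label-encoding table above
ID2CATEGORY = {
    0: "Accompagnement", 1: "Amuse-gueule", 2: "Boisson", 3: "Confiserie",
    4: "Dessert", 5: "Entrée", 6: "Plat principal", 7: "Sauce",
}

def label_to_category(label: str) -> str:
    # Pipeline labels look like "LABEL_6"
    return ID2CATEGORY[int(label.split("_")[-1])]

print(label_to_category("LABEL_6"))  # -> Plat principal
```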
<br/>
<br/>
> If you would like to know more about this model you can refer to [our blog post](https://medium.com/unify-data-office/a-cooking-language-model-fine-tuned-on-dozens-of-thousands-of-french-recipes-bcdb8e560571) |
nchervyakov/super-model | 2021-05-20T01:28:44.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| nchervyakov | 13 | transformers | hello |
ncoop57/DiGPTame-medium | 2021-05-23T10:47:17.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"arxiv:1911.00536",
"transformers",
"conversational",
"license:mit",
"text-generation"
]
| conversational | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ncoop57 | 61 | transformers | ---
tags:
- conversational
license: mit
---
## Finetuned DialoGPT model on Spanish Conversations
This model was finetuned from the original [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model on subtitles from Spanish movies and telenovelas from the awesome [OpenSubtitle dataset](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/opensubtitles).
DialoGPT paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
Some example dialog from the model:
|Role | Response |
|---------|--------|
|User | ¿Cómo está el clima hoy? |
| Bot | Muy bien. |
|User | Eso es bueno escuchar. ¿Vas a ir a la playa? |
| Bot | No, también puedo regresar. |
|User | Oh bien, bueno, iré a la playa más tarde. |
| Bot | ¡No! |
|User | ¿Qué está mal con eso? |
| Bot | ¡Cómo que no! |
|User | Estoy confundido, ¿por qué no puedo ir a la playa? |
| Bot | ¡Cómo que no! |
|User | Explícamelo por favor. |
| Bot | ¡No! |
## Using the model
Example code for trying out the model (taken directly from the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model card):
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("ncoop57/DiGPTame-medium")
model = AutoModelWithLMHead.from_pretrained("ncoop57/DiGPTame-medium")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print last output tokens from bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
## Training your own model
If you would like to finetune your own model or finetune this Spanish model, please checkout my blog post on that exact topic!
https://nathancooper.io/i-am-a-nerd/chatbot/deep-learning/gpt2/2020/05/12/chatbot-part-1.html |
ncoop57/bart-base-code-summarizer-java-v0 | 2020-12-11T21:56:54.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"summarization",
"license:mit",
"text2text-generation"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"config.json.old",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ncoop57 | 110 | transformers | ---
tags:
- summarization
license: mit
---
## ncoop57/bart-base-code-summarizer-java-v0
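A minimal usage sketch (not part of the original card; it assumes the standard `transformers` summarization pipeline, and the Java snippet and generation parameters are illustrative):
```python
from transformers import pipeline

# Hypothetical usage sketch for this Java code summarizer
summarizer = pipeline("summarization", model="ncoop57/bart-base-code-summarizer-java-v0")
java_code = "public int add(int a, int b) { return a + b; }"
print(summarizer(java_code, max_length=32, min_length=4))
```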
|
ncoop57/codeformer-code-java | 2021-06-07T02:35:04.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| ncoop57 | 1 | transformers | ||
ncoop57/codeformer-code | 2021-06-07T00:52:51.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| ncoop57 | 3 | transformers | ||
ncoop57/multilingual-codesearch | 2021-04-03T03:06:55.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| ncoop57 | 8 | transformers | ||
ncoop57/testmodel | 2021-06-07T00:41:20.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| ncoop57 | 1 | transformers | ||
ndevavarapu/t5_paraphrase | 2020-11-19T08:40:53.000Z | []
| [
".gitattributes"
]
| ndevavarapu | 0 | |||
ndevavarapu/utterance_gen | 2020-11-19T09:15:43.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| ndevavarapu | 11 | transformers | |
neal2021/test_model | 2021-03-13T15:33:02.000Z | []
| [
".gitattributes"
]
| neal2021 | 0 | |||
nefter/hugging-face | 2021-03-04T17:56:53.000Z | []
| [
".gitattributes",
"README.md"
]
| nefter | 0 | |||
neuralmind/bert-base-portuguese-cased | 2021-05-20T01:29:43.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"pt",
"dataset:brWaC",
"transformers",
"license:mit",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| neuralmind | 16,042 | transformers | ---
language: pt
license: mit
tags:
- bert
- pytorch
datasets:
- brWaC
---
# BERTimbau Base (aka "bert-base-portuguese-cased")

## Introduction
BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/).
## Available models
| Model | Arch. | #Layers | #Params |
| ---------------------------------------- | ---------- | ------- | ------- |
| `neuralmind/bert-base-portuguese-cased` | BERT-Base | 12 | 110M |
| `neuralmind/bert-large-portuguese-cased` | BERT-Large | 24 | 335M |
## Usage
```python
from transformers import AutoTokenizer # Or BertTokenizer
from transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads
from transformers import AutoModel # or BertModel, for BERT without pretraining heads
model = AutoModelForPreTraining.from_pretrained('neuralmind/bert-base-portuguese-cased')
tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-base-portuguese-cased', do_lower_case=False)
```
### Masked language modeling prediction example
```python
from transformers import pipeline
pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)
pipe('Tinha uma [MASK] no meio do caminho.')
# [{'score': 0.14287759363651276,
# 'sequence': '[CLS] Tinha uma pedra no meio do caminho. [SEP]',
# 'token': 5028,
# 'token_str': 'pedra'},
# {'score': 0.06213393807411194,
# 'sequence': '[CLS] Tinha uma árvore no meio do caminho. [SEP]',
# 'token': 7411,
# 'token_str': 'árvore'},
# {'score': 0.05515013635158539,
# 'sequence': '[CLS] Tinha uma estrada no meio do caminho. [SEP]',
# 'token': 5675,
# 'token_str': 'estrada'},
# {'score': 0.0299188531935215,
# 'sequence': '[CLS] Tinha uma casa no meio do caminho. [SEP]',
# 'token': 1105,
# 'token_str': 'casa'},
# {'score': 0.025660505518317223,
# 'sequence': '[CLS] Tinha uma cruz no meio do caminho. [SEP]',
# 'token': 3466,
# 'token_str': 'cruz'}]
```
### For BERT embeddings
```python
import torch
model = AutoModel.from_pretrained('neuralmind/bert-base-portuguese-cased')
input_ids = tokenizer.encode('Tinha uma pedra no meio do caminho.', return_tensors='pt')
with torch.no_grad():
    outs = model(input_ids)
    encoded = outs[0][0, 1:-1]  # Ignore [CLS] and [SEP] special tokens
# encoded.shape: (8, 768)
# tensor([[-0.0398, -0.3057, 0.2431, ..., -0.5420, 0.1857, -0.5775],
# [-0.2926, -0.1957, 0.7020, ..., -0.2843, 0.0530, -0.4304],
# [ 0.2463, -0.1467, 0.5496, ..., 0.3781, -0.2325, -0.5469],
# ...,
# [ 0.0662, 0.7817, 0.3486, ..., -0.4131, -0.2852, -0.2819],
# [ 0.0662, 0.2845, 0.1871, ..., -0.2542, -0.2933, -0.0661],
# [ 0.2761, -0.1657, 0.3288, ..., -0.2102, 0.0029, -0.2009]])
```
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
```
|
neuralmind/bert-large-portuguese-cased | 2021-05-20T01:31:09.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"pt",
"dataset:brWaC",
"transformers",
"license:mit",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| neuralmind | 18,250 | transformers | ---
language: pt
license: mit
tags:
- bert
- pytorch
datasets:
- brWaC
---
# BERTimbau Large (aka "bert-large-portuguese-cased")

## Introduction
BERTimbau Large is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/).
## Available models
| Model | Arch. | #Layers | #Params |
| ---------------------------------------- | ---------- | ------- | ------- |
| `neuralmind/bert-base-portuguese-cased` | BERT-Base | 12 | 110M |
| `neuralmind/bert-large-portuguese-cased` | BERT-Large | 24 | 335M |
## Usage
```python
from transformers import AutoTokenizer # Or BertTokenizer
from transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads
from transformers import AutoModel # or BertModel, for BERT without pretraining heads
model = AutoModelForPreTraining.from_pretrained('neuralmind/bert-large-portuguese-cased')
tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-large-portuguese-cased', do_lower_case=False)
```
### Masked language modeling prediction example
```python
from transformers import pipeline
pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)
pipe('Tinha uma [MASK] no meio do caminho.')
# [{'score': 0.5054386258125305,
# 'sequence': '[CLS] Tinha uma pedra no meio do caminho. [SEP]',
# 'token': 5028,
# 'token_str': 'pedra'},
# {'score': 0.05616172030568123,
# 'sequence': '[CLS] Tinha uma curva no meio do caminho. [SEP]',
# 'token': 9562,
# 'token_str': 'curva'},
# {'score': 0.02348282001912594,
# 'sequence': '[CLS] Tinha uma parada no meio do caminho. [SEP]',
# 'token': 6655,
# 'token_str': 'parada'},
# {'score': 0.01795753836631775,
# 'sequence': '[CLS] Tinha uma mulher no meio do caminho. [SEP]',
# 'token': 2606,
# 'token_str': 'mulher'},
# {'score': 0.015246033668518066,
# 'sequence': '[CLS] Tinha uma luz no meio do caminho. [SEP]',
# 'token': 3377,
# 'token_str': 'luz'}]
```
### For BERT embeddings
```python
import torch
model = AutoModel.from_pretrained('neuralmind/bert-large-portuguese-cased')
input_ids = tokenizer.encode('Tinha uma pedra no meio do caminho.', return_tensors='pt')
with torch.no_grad():
    outs = model(input_ids)
    encoded = outs[0][0, 1:-1]  # Ignore [CLS] and [SEP] special tokens
# encoded.shape: (8, 1024)
# tensor([[ 1.1872, 0.5606, -0.2264, ..., 0.0117, -0.1618, -0.2286],
# [ 1.3562, 0.1026, 0.1732, ..., -0.3855, -0.0832, -0.1052],
# [ 0.2988, 0.2528, 0.4431, ..., 0.2684, -0.5584, 0.6524],
# ...,
# [ 0.3405, -0.0140, -0.0748, ..., 0.6649, -0.8983, 0.5802],
# [ 0.1011, 0.8782, 0.1545, ..., -0.1768, -0.8880, -0.1095],
# [ 0.7912, 0.9637, -0.3859, ..., 0.2050, -0.1350, 0.0432]])
```
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
```
|
neuralspace/indic-transformers-hi-distilbert | 2020-10-27T15:02:32.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| neuralspace | 13 | transformers | |
neuralspace-reverie/indic-transformers-bn-bert | 2021-05-20T01:33:26.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"bn",
"transformers",
"MaskedLM",
"Bengali",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
]
| neuralspace-reverie | 163 | transformers | ---
language:
- bn
tags:
- MaskedLM
- Bengali
---
# Indic-Transformers Bengali BERT
## Model description
This is a BERT language model pre-trained on ~3 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-bn-bert')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-bn-bert')
text = "আপনি কেমন আছেন?"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 6, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-bn-distilbert | 2020-12-11T21:57:07.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"bn",
"transformers",
"MaskedLM",
"Bengali",
"DistilBERT",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| neuralspace-reverie | 102 | transformers | ---
language:
- bn
tags:
- MaskedLM
- Bengali
- DistilBERT
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Bengali DistilBERT
## Model description
This is a DistilBERT language model pre-trained on ~6 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-bn-distilbert')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-bn-distilbert')
text = "আপনি কেমন আছেন?"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-bn-roberta | 2021-05-20T18:47:17.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"bn",
"transformers",
"MaskedLM",
"Bengali",
"RoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| neuralspace-reverie | 350 | transformers | ---
language:
- bn
tags:
- MaskedLM
- Bengali
- RoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Bengali RoBERTa
## Model description
This is a RoBERTa language model pre-trained on ~6 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-bn-roberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-bn-roberta')
text = "আপনি কেমন আছেন?"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 10, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-bn-xlmroberta | 2020-12-11T21:57:15.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"masked-lm",
"bn",
"transformers",
"MaskedLM",
"Bengali",
"XLMRoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"sentencepiece.bpe.vocab",
"tf_model.h5"
]
| neuralspace-reverie | 27 | transformers | ---
language:
- bn
tags:
- MaskedLM
- Bengali
- XLMRoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Bengali XLMRoBERTa
## Model description
This is an XLMRoBERTa language model pre-trained on ~3 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-bn-xlmroberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-bn-xlmroberta')
text = "আপনি কেমন আছেন?"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-hi-bert | 2021-05-20T01:35:03.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"hi",
"transformers",
"MaskedLM",
"Hindi",
"BERT",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
]
| neuralspace-reverie | 137 | transformers | ---
language:
- hi
tags:
- MaskedLM
- Hindi
- BERT
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Hindi BERT
## Model description
This is a BERT language model pre-trained on ~3 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-hi-bert')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-hi-bert')
text = "आपका स्वागत हैं"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-hi-distilbert | 2020-12-11T21:57:21.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"hi",
"transformers",
"MaskedLM",
"Hindi",
"DistilBERT",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| neuralspace-reverie | 196 | transformers | ---
language:
- hi
tags:
- MaskedLM
- Hindi
- DistilBERT
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Hindi DistilBERT
## Model description
This is a DistilBERT language model pre-trained on ~10 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-hi-distilbert')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-hi-distilbert')
text = "आपका स्वागत हैं"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-hi-roberta | 2021-05-20T18:48:28.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"hi",
"transformers",
"MaskedLM",
"Hindi",
"RoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| neuralspace-reverie | 64 | transformers | ---
language:
- hi
tags:
- MaskedLM
- Hindi
- RoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Hindi RoBERTa
## Model description
This is a RoBERTa language model pre-trained on ~10 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-hi-roberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-hi-roberta')
text = "आपका स्वागत हैं"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 11, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-hi-xlmroberta | 2020-12-11T21:57:29.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"masked-lm",
"hi",
"transformers",
"MaskedLM",
"Hindi",
"XLMRoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"sentencepiece.bpe.vocab",
"tf_model.h5"
]
| neuralspace-reverie | 32 | transformers | ---
language:
- hi
tags:
- MaskedLM
- Hindi
- XLMRoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Hindi XLMRoBERTa
## Model description
This is an XLMRoBERTa language model pre-trained on ~3 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-hi-xlmroberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-hi-xlmroberta')
text = "आपका स्वागत हैं"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-te-bert | 2021-05-20T01:37:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"te",
"transformers",
"MaskedLM",
"Telugu",
"BERT",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"training_args.bin",
"vocab.txt"
]
| neuralspace-reverie | 10 | transformers | ---
language:
- te
tags:
- MaskedLM
- Telugu
- BERT
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Telugu BERT
## Model description
This is a BERT language model pre-trained on ~1.6 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-te-bert')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-te-bert')
text = "మీరు ఎలా ఉన్నారు"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-te-distilbert | 2020-12-11T21:57:36.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"te",
"transformers",
"MaskedLM",
"Telugu",
"DistilBERT",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| neuralspace-reverie | 19 | transformers | ---
language:
- te
tags:
- MaskedLM
- Telugu
- DistilBERT
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Telugu DistilBERT
## Model description
This is a DistilBERT language model pre-trained on ~2 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-te-distilbert')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-te-distilbert')
text = "మీరు ఎలా ఉన్నారు"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-te-roberta | 2021-05-20T18:49:21.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"te",
"transformers",
"MaskedLM",
"Telugu",
"RoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| neuralspace-reverie | 25 | transformers | ---
language:
- te
tags:
- MaskedLM
- Telugu
- RoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Telugu RoBERTa
## Model description
This is a RoBERTa language model pre-trained on ~2 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-te-roberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-te-roberta')
text = "మీరు ఎలా ఉన్నారు"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 14, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-te-xlmroberta | 2020-12-11T21:57:43.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"masked-lm",
"te",
"transformers",
"MaskedLM",
"Telugu",
"XLMRoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"sentencepiece.bpe.vocab",
"tf_model.h5"
]
| neuralspace-reverie | 16 | transformers | ---
language:
- te
tags:
- MaskedLM
- Telugu
- XLMRoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Telugu XLMRoBERTa
## Model description
This is a XLMRoBERTa language model pre-trained on ~1.6 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-te-xlmroberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-te-xlmroberta')
text = "మీరు ఎలా ఉన్నారు"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuraly/bert-base-italian-cased-sentiment | 2021-05-20T01:38:06.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"it",
"transformers",
"sentiment",
"Italian",
"license:mit"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| neuraly | 1,590 | transformers | ---
language: it
thumbnail: https://neuraly.ai/static/assets/images/huggingface/thumbnail.png
tags:
- sentiment
- Italian
license: MIT
widget:
- text: "Huggingface è un team fantastico!"
---
# 🤗 + neuraly - Italian BERT Sentiment model
## Model description
This model performs sentiment analysis on Italian sentences. It was trained starting from an instance of [bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased) and fine-tuned on an Italian dataset of tweets, reaching 82% accuracy on that dataset.
## Intended uses & limitations
#### How to use
```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("neuraly/bert-base-italian-cased-sentiment")
# Load the model, use .cuda() to load it on the GPU
model = AutoModelForSequenceClassification.from_pretrained("neuraly/bert-base-italian-cased-sentiment")
sentence = 'Huggingface è un team fantastico!'
input_ids = tokenizer.encode(sentence, add_special_tokens=True)
# Create tensor, use .cuda() to transfer the tensor to GPU
tensor = torch.tensor(input_ids).long()
# Fake batch dimension
tensor = tensor.unsqueeze(0)
# Call the model and get the logits
logits, = model(tensor)
# Remove the fake batch dimension
logits = logits.squeeze(0)
# The model was trained with a Log Likelihood + Softmax combined loss, so to extract probabilities we apply a softmax on top of the logits tensor
proba = nn.functional.softmax(logits, dim=0)
# Unpack the tensor to obtain negative, neutral and positive probabilities
negative, neutral, positive = proba
```
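Note: with more recent versions of `transformers` (4.x and later) the forward pass returns a `ModelOutput` object instead of a tuple, so the tuple unpacking above would fail. A minimal sketch of the equivalent step under that assumption:
```python
# With transformers >= 4.x the logits are an attribute of the returned ModelOutput
outputs = model(tensor)
logits = outputs.logits.squeeze(0)
proba = nn.functional.softmax(logits, dim=0)
negative, neutral, positive = proba
```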
#### Limitations and bias
A possible drawback (or bias) of this model is related to the fact that it was trained on a tweet dataset, with all the limitations that come with it. The domain is strongly related to football players and teams, but it works surprisingly well even on other topics.
## Training data
We trained the model by combining the two tweet datasets taken from [Sentipolc EVALITA 2016](http://www.di.unito.it/~tutreeb/sentipolc-evalita16/data.html). Overall the dataset consists of 45K pre-processed tweets.
The model weights come from a pre-trained instance of [bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased). A huge "thank you" goes to that team, brilliant work!
## Training procedure
#### Preprocessing
We tried to save as much information as possible, since BERT captures extremely well the semantic of complex text sequences. Overall we removed only **@mentions**, **urls** and **emails** from every tweet and kept pretty much everything else.
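A rough sketch of this kind of cleaning (a hypothetical helper, not the authors' exact preprocessing code):
```python
import re

def clean_tweet(text: str) -> str:
    """Remove URLs, e-mail addresses and @mentions, keeping everything else."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # URLs
    text = re.sub(r"\S+@\S+\.\S+", " ", text)           # e-mail addresses
    text = re.sub(r"@\w+", " ", text)                   # @mentions
    return re.sub(r"\s+", " ", text).strip()

print(clean_tweet("Grande partita di @acmilan! Info su https://example.com"))
# -> "Grande partita di ! Info su"
```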
#### Hardware
- **GPU**: Nvidia GTX1080ti
- **CPU**: AMD Ryzen7 3700x 8c/16t
- **RAM**: 64GB DDR4
#### Hyperparameters
- Optimizer: **AdamW** with learning rate of **2e-5**, epsilon of **1e-8**
- Max epochs: **5**
- Batch size: **32**
- Early Stopping: **enabled** with patience = 1
Early stopping was triggered after 3 epochs.
## Eval results
The model achieves an overall accuracy of 82% on the test set.
The test set is a 20% split of the whole dataset.
## About us
[Neuraly](https://neuraly.ai) is a young and dynamic startup committed to designing AI-driven solutions and services through the most advanced Machine Learning and Data Science technologies. You can find out more about who we are and what we do on our [website](https://neuraly.ai).
## Acknowledgments
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download the model from their S3 storage and live test it from their inference API 🤗.
|
neurocode/Icelandic-NER-base | 2020-10-22T07:52:14.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"log_history.json",
"pytorch_model.bin",
"training_args.bin"
]
| neurocode | 18 | transformers | |
neurocode/Icelandic-NER-large | 2020-10-22T09:32:22.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"log_history.json",
"pytorch_model.bin",
"training_args.bin"
]
| neurocode | 33 | transformers | |
neurocode/IsRoBERTa | 2021-05-20T18:50:32.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"is",
"dataset:Icelandic portion of the OSCAR corpus from INRIA",
"dataset:oscar",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"log_history.json",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| neurocode | 32 | transformers | ---
language: is
datasets:
- Icelandic portion of the OSCAR corpus from INRIA
- oscar
---
# IsRoBERTa, a RoBERTa-like masked language model
Probably the first Icelandic transformer language model!
## Overview
**Language:** Icelandic
**Downstream-task:** masked-lm
**Training data:** OSCAR corpus
**Code:** See [here](https://github.com/neurocode-io/icelandic-language-model)
**Infrastructure**: 1x Nvidia K80
## Hyperparameters
```
per_device_train_batch_size = 48
n_epochs = 1
vocab_size = 52000
max_position_embeddings = 514
num_attention_heads = 12
num_hidden_layers = 6
type_vocab_size = 1
learning_rate=0.00005
```
## Usage
### In Transformers
```python
from transformers import (
pipeline,
AutoTokenizer,
AutoModelWithLMHead
)
model_name = "neurocode/IsRoBERTa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithLMHead.from_pretrained(model_name)
>>> fill_mask = pipeline(
... "fill-mask",
... model=model,
... tokenizer=tokenizer
... )
>>> result = fill_mask("Hann fór út að <mask>.")
>>> result
[
{'sequence': '<s>Hann fór út að nýju.</s>', 'score': 0.03395755589008331, 'token': 2219, 'token_str': 'Ġnýju'},
{'sequence': '<s>Hann fór út að undanförnu.</s>', 'score': 0.029087543487548828, 'token': 7590, 'token_str': 'Ġundanförnu'},
{'sequence': '<s>Hann fór út að lokum.</s>', 'score': 0.024420788511633873, 'token': 4384, 'token_str': 'Ġlokum'},
{'sequence': '<s>Hann fór út að þessu.</s>', 'score': 0.021231256425380707, 'token': 921, 'token_str': 'Ġþessu'},
{'sequence': '<s>Hann fór út að honum.</s>', 'score': 0.0205782949924469, 'token': 1136, 'token_str': 'Ġhonum'}
]
```
## Authors
Bobby Donchev: `contact [at] donchev.is`
Elena Cramer: `elena.cramer [at] neurocode.io`
## About us
We bring AI software to production for our customers.
Our focus: AI software development.
Get in touch:
[LinkedIn](https://de.linkedin.com/company/neurocodeio) | [Website](https://neurocode.io)
|
neuromusic/model_name | 2021-02-19T21:07:12.000Z | []
| [
".gitattributes"
]
| neuromusic | 0 | |||
neuropark/sahajBERT-NCC | 2021-06-15T12:40:08.000Z | [
"pytorch",
"albert",
"text-classification",
"bn",
"dataset:IndicGlue",
"transformers",
"collaborative",
"bengali",
"SequenceClassification",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| neuropark | 63 | transformers |
---
language: bn
tags:
- collaborative
- bengali
- SequenceClassification
license: apache-2.0
datasets: IndicGlue
metrics:
- Loss
- Accuracy
- Precision
- Recall
widget:
- text: "এশিয়ায় প্রথম দৃষ্টিহীন ব্যক্তির মাউন্ট এভারেস্ট জয়|"
---
# sahajBERT News Article Classification
## Model description
[sahajBERT](https://huggingface.co/neuropark/sahajBERT) fine-tuned for news article classification using the `sna.bn` split of [IndicGlue](https://huggingface.co/datasets/indic_glue).
The model is trained to classify articles into 6 different classes:
| Label id | Label |
|:--------:|:----:|
|0 | kolkata|
|1 | state|
|2 | national|
|3 | sports|
|4 | entertainment|
|5 | international|
## Intended uses & limitations
#### How to use
You can use this model directly with a pipeline for Sequence Classification:
```python
from transformers import AlbertForSequenceClassification, TextClassificationPipeline, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NCC")
# Initialize model
model = AlbertForSequenceClassification.from_pretrained("neuropark/sahajBERT-NCC")
# Initialize pipeline
pipeline = TextClassificationPipeline(tokenizer=tokenizer, model=model)
raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me
output = pipeline(raw_text)
```
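The pipeline returns a label id rather than a class name; if the checkpoint uses the default `LABEL_<id>` naming, it can be mapped back to the classes listed above with a small helper (illustrative post-processing, not part of the original card):
```python
# Map the predicted label id back to a readable class name (see the table above)
id2name = {0: "kolkata", 1: "state", 2: "national", 3: "sports", 4: "entertainment", 5: "international"}

pred = output[0]                              # e.g. {"label": "LABEL_2", "score": 0.97}
label_id = int(pred["label"].split("_")[-1])  # assumes default "LABEL_<id>" names
print(id2name[label_id], pred["score"])
```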
#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP
## Training data
The model was initialized with pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT) at step 19519 and trained on the `sna.bn` split of [IndicGlue](https://huggingface.co/datasets/indic_glue).
## Training procedure
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
## Eval results
- Loss: 0.2477145493030548
- Accuracy: 0.926293408929837
- Macro F1: 0.9079785326650756
- Recall: 0.926293408929837
- Weighted F1: 0.9266428029354202
- Macro Precision: 0.9109938492260489
- Micro Precision: 0.926293408929837
- Weighted Precision: 0.9288535478995414
- Macro Recall: 0.9069095007692186
- Micro Recall: 0.926293408929837
- Weighted Recall: 0.926293408929837
### BibTeX entry and citation info
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
|
neuropark/sahajBERT-NER | 2021-06-15T08:12:18.000Z | [
"pytorch",
"albert",
"token-classification",
"bn",
"dataset:xtreme",
"transformers",
"collaborative",
"bengali",
"NER",
"license:apache-2.0"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| neuropark | 146 | transformers |
---
language: bn
tags:
- collaborative
- bengali
- NER
license: apache-2.0
datasets: xtreme
metrics:
- Loss
- Accuracy
- Precision
- Recall
---
# sahajBERT Named Entity Recognition
## Model description
[sahajBERT](https://huggingface.co/neuropark/sahajBERT) fine-tuned for NER using the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann).
Named Entities predicted by the model:
| Label id | Label |
|:--------:|:----:|
|0 |O|
|1 |B-PER|
|2 |I-PER|
|3 |B-ORG|
|4 |I-ORG|
|5 |B-LOC|
|6 |I-LOC|
## Intended uses & limitations
#### How to use
You can use this model directly with a pipeline for token classification:
```python
from transformers import AlbertForTokenClassification, TokenClassificationPipeline, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NER")
# Initialize model
model = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT-NER")
# Initialize pipeline
pipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model)
raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me
output = pipeline(raw_text)
```
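The pipeline output is a list with one dictionary per (sub)token; a minimal sketch of reading it (grouping subwords into full entity spans is left to downstream code):
```python
# Print each predicted token with its tag and confidence
for token in output:
    print(token["word"], token["entity"], round(token["score"], 3))
```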
#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP
## Training data
The model was initialized with pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT) at step 19519 and trained on the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann).
## Training procedure
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
## Eval results
- Loss: 0.11714419722557068
- Accuracy: 0.9772286821705426
- Precision: 0.9585365853658536
- Recall: 0.9651277013752456
- F1: 0.9618208516886931
### BibTeX entry and citation info
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
|
neuropark/sahajBERT | 2021-06-15T12:35:55.000Z | [
"pytorch",
"albert",
"pretraining",
"bn",
"dataset:Wikipedia",
"dataset:Oscar",
"arxiv:1909.11942",
"transformers",
"collaborative",
"bengali",
"bangla",
"license:apache-2.0",
"fill-mask",
"pipeline_tag:fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"optimizer_state.pt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| neuropark | 499 | transformers | ---
language: bn
tags:
- collaborative
- bengali
- albert
- bangla
license: apache-2.0
datasets:
- Wikipedia
- Oscar
widget:
- text: "জীবনে সবচেয়ে মূল্যবান জিনিস হচ্ছে [MASK]।"
pipeline_tag: fill-mask
---
<!-- TODO: change widget text -->
# sahajBERT
Collaboratively pre-trained model on Bengali language using masked language modeling (MLM) and Sentence Order Prediction (SOP) objectives.
## Model description
<!-- You can embed local or remote images using `` -->
sahajBERT is a model composed of 1) a tokenizer specially designed for Bengali and 2) an [ALBERT](https://arxiv.org/abs/1909.11942) architecture collaboratively pre-trained on a dump of Wikipedia in Bengali and the Bengali part of OSCAR.
<!-- Add more information about the collaborative training when we have time / preprint available -->
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering.
We trained our model on 2 of these downstream tasks: [sequence classification](https://huggingface.co/neuropark/sahajBERT-NCC) and [token classification](https://huggingface.co/neuropark/sahajBERT-NER)
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import AlbertForMaskedLM, FillMaskPipeline, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")
# Initialize model
model = AlbertForMaskedLM.from_pretrained("neuropark/sahajBERT")
# Initialize pipeline
pipeline = FillMaskPipeline(tokenizer=tokenizer, model=model)
raw_text = "ধন্যবাদ। আপনার সাথে কথা [MASK] ভালো লাগলো" # Change me
pipeline(raw_text)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertModel, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")
# Initialize model
model = AlbertModel.from_pretrained("neuropark/sahajBERT")
text = "ধন্যবাদ। আপনার সাথে কথা বলে ভালো লাগলো" # Change me
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP
## Training data
The tokenizer was trained on the Bengali part of OSCAR and the model on a [dump of Wikipedia in Bengali](https://huggingface.co/datasets/lhoestq/wikipedia_bn) and the Bengali part of [OSCAR](https://huggingface.co/datasets/oscar).
## Training procedure
This model was trained in a collaborative manner by volunteer participants.
<!-- Add more information about the collaborative training when we have time / preprint available + Preprocessing, hardware used, hyperparameters... (maybe use figures)-->
### Contributors leaderboard
| Rank | Username | Total contributed runtime |
|:-------------:|:-------------:|-------------:|
| 1|[khalidsaifullaah](https://huggingface.co/khalidsaifullaah)|11 days 21:02:08|
| 2|[ishanbagchi](https://huggingface.co/ishanbagchi)|9 days 20:37:00|
| 3|[tanmoyio](https://huggingface.co/tanmoyio)|9 days 18:08:34|
| 4|[debajit](https://huggingface.co/debajit)|8 days 14:15:10|
| 5|[skylord](https://huggingface.co/skylord)|6 days 16:35:29|
| 6|[ibraheemmoosa](https://huggingface.co/ibraheemmoosa)|5 days 01:05:57|
| 7|[SaulLu](https://huggingface.co/SaulLu)|5 days 00:46:36|
| 8|[lhoestq](https://huggingface.co/lhoestq)|4 days 20:11:16|
| 9|[nilavya](https://huggingface.co/nilavya)|4 days 08:51:51|
|10|[Priyadarshan](https://huggingface.co/Priyadarshan)|4 days 02:28:55|
|11|[anuragshas](https://huggingface.co/anuragshas)|3 days 05:00:55|
|12|[sujitpal](https://huggingface.co/sujitpal)|2 days 20:52:33|
|13|[manandey](https://huggingface.co/manandey)|2 days 16:17:13|
|14|[albertvillanova](https://huggingface.co/albertvillanova)|2 days 14:14:31|
|15|[justheuristic](https://huggingface.co/justheuristic)|2 days 13:20:52|
|16|[w0lfw1tz](https://huggingface.co/w0lfw1tz)|2 days 07:22:48|
|17|[smoker](https://huggingface.co/smoker)|2 days 02:52:03|
|18|[Soumi](https://huggingface.co/Soumi)|1 days 20:42:02|
|19|[Anjali](https://huggingface.co/Anjali)|1 days 16:28:00|
|20|[OptimusPrime](https://huggingface.co/OptimusPrime)|1 days 09:16:57|
|21|[theainerd](https://huggingface.co/theainerd)|1 days 04:48:57|
|22|[yhn112](https://huggingface.co/yhn112)|0 days 20:57:02|
|23|[kolk](https://huggingface.co/kolk)|0 days 17:57:37|
|24|[arnab](https://huggingface.co/arnab)|0 days 17:54:12|
|25|[imavijit](https://huggingface.co/imavijit)|0 days 16:07:26|
|26|[osanseviero](https://huggingface.co/osanseviero)|0 days 14:16:45|
|27|[subhranilsarkar](https://huggingface.co/subhranilsarkar)|0 days 13:04:46|
|28|[sagnik1511](https://huggingface.co/sagnik1511)|0 days 12:24:57|
|29|[anindabitm](https://huggingface.co/anindabitm)|0 days 08:56:44|
|30|[borzunov](https://huggingface.co/borzunov)|0 days 04:07:35|
|31|[thomwolf](https://huggingface.co/thomwolf)|0 days 03:53:15|
|32|[priyadarshan](https://huggingface.co/priyadarshan)|0 days 03:40:11|
|33|[ali007](https://huggingface.co/ali007)|0 days 03:34:37|
|34|[sbrandeis](https://huggingface.co/sbrandeis)|0 days 03:18:16|
|35|[Preetha](https://huggingface.co/Preetha)|0 days 03:13:47|
|36|[Mrinal](https://huggingface.co/Mrinal)|0 days 03:01:43|
|37|[laxya007](https://huggingface.co/laxya007)|0 days 02:18:34|
|38|[lewtun](https://huggingface.co/lewtun)|0 days 00:34:43|
|39|[Rounak](https://huggingface.co/Rounak)|0 days 00:26:10|
|40|[kshmax](https://huggingface.co/kshmax)|0 days 00:06:38|
## Eval results
We evaluated the quality of the sahajBERT model against 2 benchmark models ([XLM-R-large](https://huggingface.co/xlm-roberta-large) and [IndicBert](https://huggingface.co/ai4bharat/indic-bert)) by fine-tuning each pre-trained model 3 times on two downstream tasks in Bengali:
- **NER**: named entity recognition on the Bengali split of the [WikiANN](https://huggingface.co/datasets/wikiann) dataset
- **NCC**: multi-class classification on the Soham News Category Classification dataset from IndicGLUE
| Base pre-trained Model | NER - F1 (mean ± std) | NCC - Accuracy (mean ± std) |
|:-------------:|:-------------:|:-------------:|
|sahajBERT | 95.45 ± 0.53| 91.97 ± 0.47|
|[XLM-R-large](https://huggingface.co/xlm-roberta-large) | 96.48 ± 0.22| 90.05 ± 0.38|
|[IndicBert](https://huggingface.co/ai4bharat/indic-bert) | 92.52 ± 0.45| 74.46 ± 1.91|
### BibTeX entry and citation info
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` --> |
newton/first | 2021-04-16T23:44:34.000Z | []
| [
".gitattributes"
]
| newton | 0 | |||
nfliu/scibert_basevocab_uncased | 2021-05-20T01:39:31.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
]
| nfliu | 8,499 | transformers | ||
nfliu/scibert_s2orc_test | 2021-05-20T22:16:34.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| nfliu | 8 | transformers | ||
nghuyong/ernie-1.0 | 2021-05-20T01:40:40.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"zh",
"arxiv:1904.09223",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| nghuyong | 2,562 | transformers | ---
language: zh
---
# ERNIE-1.0
## Introduction
ERNIE (Enhanced Representation through kNowledge IntEgration) was proposed by Baidu in 2019.
It is designed to learn language representations enhanced by knowledge masking strategies, i.e. entity-level masking and phrase-level masking.
Experimental results show that ERNIE achieves state-of-the-art results on five Chinese natural language processing tasks, including natural language inference,
semantic similarity, named entity recognition, sentiment analysis and question answering.
More detail: https://arxiv.org/abs/1904.09223
## Released Model Info
|Model Name|Language|Model Structure|
|:---:|:---:|:---:|
|ernie-1.0| Chinese |Layer:12, Hidden:768, Heads:12|
This released pytorch model is converted from the officially released PaddlePaddle ERNIE model and
a series of experiments have been conducted to check the accuracy of the conversion.
- Official PaddlePaddle ERNIE repo: https://github.com/PaddlePaddle/ERNIE
- Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch
## How to use
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0")
model = AutoModel.from_pretrained("nghuyong/ernie-1.0")
```
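For example, contextual embeddings can be extracted as follows (a minimal sketch; the example sentence is arbitrary):
```python
import torch

inputs = tokenizer("百度是一家高科技公司", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_state = outputs[0]  # shape: [1, sequence_length, 768]
print(last_hidden_state.shape)
```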
## Citation
```bibtex
@article{sun2019ernie,
title={Ernie: Enhanced representation through knowledge integration},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Chen, Xuyi and Zhang, Han and Tian, Xin and Zhu, Danxiang and Tian, Hao and Wu, Hua},
journal={arXiv preprint arXiv:1904.09223},
year={2019}
}
```
|
|
nghuyong/ernie-2.0-en | 2021-05-20T01:42:24.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"en",
"arxiv:1907.12412",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| nghuyong | 7,697 | transformers | ---
language: en
---
# ERNIE-2.0
## Introduction
ERNIE 2.0 is a continual pre-training framework proposed by Baidu in 2019,
which incrementally builds and learns pre-training tasks through constant multi-task learning.
Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks, including English tasks on the GLUE benchmark and several common tasks in Chinese.
More detail: https://arxiv.org/abs/1907.12412
## Released Model Info
|Model Name|Language|Model Structure|
|:---:|:---:|:---:|
|ernie-2.0-en| English |Layer:12, Hidden:768, Heads:12|
This released pytorch model is converted from the officially released PaddlePaddle ERNIE model and
a series of experiments have been conducted to check the accuracy of the conversion.
- Official PaddlePaddle ERNIE repo: https://github.com/PaddlePaddle/ERNIE
- Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch
## How to use
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-2.0-en")
model = AutoModel.from_pretrained("nghuyong/ernie-2.0-en")
```
## Citation
```bibtex
@article{sun2019ernie20,
title={ERNIE 2.0: A Continual Pre-training Framework for Language Understanding},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:1907.12412},
year={2019}
}
```
|
|
nghuyong/ernie-2.0-large-en | 2021-05-20T01:45:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"arxiv:1907.12412",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| nghuyong | 1,339 | transformers | # ERNIE-2.0-large
## Introduction
ERNIE 2.0 is a continual pre-training framework proposed by Baidu in 2019,
which incrementally builds and learns pre-training tasks through constant multi-task learning.
Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks, including English tasks on the GLUE benchmark and several common tasks in Chinese.
More detail: https://arxiv.org/abs/1907.12412
## Released Model Info
|Model Name|Language|Model Structure|
|:---:|:---:|:---:|
|ernie-2.0-large-en| English |Layer:24, Hidden:1024, Heads:16|
This released pytorch model is converted from the officially released PaddlePaddle ERNIE model and
a series of experiments have been conducted to check the accuracy of the conversion.
- Official PaddlePaddle ERNIE repo: https://github.com/PaddlePaddle/ERNIE
- Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch
## How to use
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-2.0-large-en")
model = AutoModel.from_pretrained("nghuyong/ernie-2.0-large-en")
```
## Citation
```bibtex
@article{sun2019ernie20,
title={ERNIE 2.0: A Continual Pre-training Framework for Language Understanding},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:1907.12412},
year={2019}
}
```
|
|
nghuyong/ernie-tiny | 2021-05-20T01:47:09.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"en",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| nghuyong | 75 | transformers | ---
language: en
---
# ERNIE-tiny
## Introduction
ERNIE-tiny is a compressed model derived from the [ERNIE 2.0](../ernie-2.0-en) base model through model structure compression and model distillation.
Through compression, the performance of ERNIE-tiny decreases by an average of only 2.37% compared to ERNIE 2.0 base,
yet it still outperforms Google BERT by 8.35%, while inference speed increases by 4.3 times.
More details: https://github.com/PaddlePaddle/ERNIE/blob/develop/distill/README.md
## Released Model Info
|Model Name|Language|Model Structure|
|:---:|:---:|:---:|
|ernie-tiny| English |Layer:3, Hidden:1024, Heads:16|
This released pytorch model is converted from the officially released PaddlePaddle ERNIE model and
a series of experiments have been conducted to check the accuracy of the conversion.
- Official PaddlePaddle ERNIE repo: https://github.com/PaddlePaddle/ERNIE
- Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch
## How to use
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-tiny")
model = AutoModel.from_pretrained("nghuyong/ernie-tiny")
```
## Citation
```bibtex
@article{sun2019ernie20,
title={ERNIE 2.0: A Continual Pre-training Framework for Language Understanding},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:1907.12412},
year={2019}
}
```
|
|
nguyenthanhasia/BERTLaw | 2021-05-20T01:48:06.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| nguyenthanhasia | 10 | transformers | ||
nguyenthanhasia/VNBertLaw | 2021-05-20T01:49:05.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| nguyenthanhasia | 7 | transformers | This is Vietnamese Bert Law
|
|
nicocesar/helloworld | 2021-03-31T17:32:39.000Z | []
| [
".gitattributes",
"README.md"
]
| nicocesar | 0 | |||
nicosi/test | 2021-04-15T11:10:37.000Z | []
| [
".gitattributes"
]
| nicosi | 0 | |||
nielsr/canine-s | 2021-06-05T09:33:04.000Z | [
"pytorch",
"canine",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json"
]
| nielsr | 183 | transformers | ||
nielsr/coref-bert-base | 2021-01-21T10:06:00.000Z | [
"pytorch",
"en",
"dataset:wikipedia",
"dataset:quoref",
"dataset:docred",
"dataset:fever",
"dataset:gap",
"dataset:winograd_wsc",
"dataset:winogender",
"dataset:glue",
"arxiv:2004.06870",
"transformers",
"exbert",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| nielsr | 10 | transformers | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- wikipedia
- quoref
- docred
- fever
- gap
- winograd_wsc
- winogender
- glue
---
# CorefBERTa base model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in
[this paper](https://arxiv.org/abs/2004.06870) and first released in
[this repository](https://github.com/thunlp/CorefBERT).
Disclaimer: The team releasing CorefBERT did not write a model card for this model so this model card has been written by me.
## Model description
CorefBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Mention reference prediction (MRP): this is a novel training task which is proposed to enhance coreferential reasoning ability. MRP utilizes the
mention reference masking strategy to mask one of the repeated mentions and then employs a copy-based training objective to predict the masked tokens by copying from other tokens in the sequence.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. If you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CorefBERT model as inputs.
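For instance, such features can be extracted with the standard Hugging Face API (a minimal sketch, assuming the checkpoint loads with the regular BERT classes; the example sentence is arbitrary):
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("nielsr/coref-bert-base")
model = BertModel.from_pretrained("nielsr/coref-bert-base")

text = "Alice handed the book to Bob because he asked for it."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
features = outputs[0]  # [1, sequence_length, 768] token representations
```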
### BibTeX entry and citation info
```bibtex
@misc{ye2020coreferential,
title={Coreferential Reasoning Learning for Language Representation},
author={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},
year={2020},
eprint={2004.06870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
|
nielsr/coref-bert-large | 2021-01-21T10:06:48.000Z | [
"pytorch",
"en",
"dataset:wikipedia",
"dataset:quoref",
"dataset:docred",
"dataset:fever",
"dataset:gap",
"dataset:winograd_wsc",
"dataset:winogender",
"dataset:glue",
"arxiv:2004.06870",
"transformers",
"exbert",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| nielsr | 8 | transformers | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- wikipedia
- quoref
- docred
- fever
- gap
- winograd_wsc
- winogender
- glue
---
# CorefBERT large model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in
[this paper](https://arxiv.org/abs/2004.06870) and first released in
[this repository](https://github.com/thunlp/CorefBERT).
Disclaimer: The team releasing CorefBERT did not write a model card for this model so this model card has been written by me.
## Model description
CorefBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Mention reference prediction (MRP): this is a novel training task which is proposed to enhance coreferential reasoning ability. MRP utilizes the
mention reference masking strategy to mask one of the repeated mentions and then employs a copy-based training objective to predict the masked tokens by copying from other tokens in the sequence.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. If you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CorefBERT model as inputs.
### BibTeX entry and citation info
```bibtex
@misc{ye2020coreferential,
title={Coreferential Reasoning Learning for Language Representation},
author={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},
year={2020},
eprint={2004.06870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
|
nielsr/coref-roberta-base | 2021-01-21T08:18:55.000Z | [
"pytorch",
"en",
"dataset:wikipedia",
"dataset:quoref",
"dataset:docred",
"dataset:fever",
"dataset:gap",
"dataset:winograd_wsc",
"dataset:winogender",
"dataset:glue",
"arxiv:2004.06870",
"transformers",
"exbert",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
]
| nielsr | 35 | transformers | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- wikipedia
- quoref
- docred
- fever
- gap
- winograd_wsc
- winogender
- glue
---
# CorefRoBERTa base model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in
[this paper](https://arxiv.org/abs/2004.06870) and first released in
[this repository](https://github.com/thunlp/CorefBERT).
Disclaimer: The team releasing CorefRoBERTa did not write a model card for this model so this model card has been written by me.
## Model description
CorefRoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Mention reference prediction (MRP): this is a novel training task which is proposed to enhance coreferential reasoning ability. MRP utilizes the
mention reference masking strategy to mask one of the repeated mentions and then employs a copy-based training objective to predict the masked tokens by copying from other tokens in the sequence.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. If you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CorefRoBERTa model as inputs.
### BibTeX entry and citation info
```bibtex
@misc{ye2020coreferential,
title={Coreferential Reasoning Learning for Language Representation},
author={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},
year={2020},
eprint={2004.06870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
|
nielsr/coref-roberta-large | 2021-01-21T10:07:15.000Z | [
"pytorch",
"en",
"dataset:wikipedia",
"dataset:quoref",
"dataset:docred",
"dataset:fever",
"dataset:gap",
"dataset:winograd_wsc",
"dataset:winogender",
"dataset:glue",
"arxiv:2004.06870",
"transformers",
"exbert",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
]
| nielsr | 13 | transformers | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- wikipedia
- quoref
- docred
- fever
- gap
- winograd_wsc
- winogender
- glue
---
# CorefRoBERTa large model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in
[this paper](https://arxiv.org/abs/2004.06870) and first released in
[this repository](https://github.com/thunlp/CorefBERT).
Disclaimer: The team releasing CorefRoBERTa did not write a model card for this model so this model card has been written by me.
## Model description
CorefRoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Mention reference prediction (MRP): this is a novel training task which is proposed to enhance coreferential reasoning ability. MRP utilizes the
mention reference masking strategy to mask one of the repeated mentions and then employs a copy-based training objective to predict the masked tokens by copying from other tokens in the sequence.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. If you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CorefRoBERTa model as inputs.
### BibTeX entry and citation info
```bibtex
@misc{ye2020coreferential,
title={Coreferential Reasoning Learning for Language Representation},
author={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},
year={2020},
eprint={2004.06870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
|
nielsr/detr-resnet-50-new | 2021-02-09T10:27:09.000Z | [
"pytorch",
"detr",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| nielsr | 48 | transformers | ||
nielsr/detr-resnet-50 | 2021-06-08T13:56:54.000Z | [
"pytorch",
"detr",
"transformers"
]
| [
".gitattributes",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| nielsr | 56 | transformers | ||
nielsr/detr-testje | 2021-04-28T06:42:48.000Z | [
"pytorch",
"detr",
"transformers"
]
| [
".gitattributes",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| nielsr | 39 | transformers | ||
nielsr/dino_deits8 | 2021-05-03T08:17:02.000Z | [
"pytorch",
"vit",
"transformers"
]
| [
".gitattributes",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| nielsr | 15 | transformers | ||
nielsr/dino_vitb16 | 2021-05-02T18:20:04.000Z | [
"pytorch",
"vit",
"transformers"
]
| [
".gitattributes",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| nielsr | 8 | transformers | ||
nielsr/dino_vitb8 | 2021-05-03T08:00:43.000Z | [
"pytorch",
"vit",
"transformers"
]
| [
".gitattributes",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| nielsr | 6 | transformers | ||
nielsr/luke-large | 2021-02-18T15:04:30.000Z | [
"pytorch",
"luke",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| nielsr | 11 | transformers | ||
nielsr/nt5-small-rc1 | 2021-05-27T12:37:21.000Z | [
"pytorch",
"t5",
"seq2seq",
"dataset:drop",
"arxiv:2104.07307",
"arxiv:1903.00161",
"transformers",
"license:apache-2.0",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer.json",
"tokenizer_config.json"
]
| nielsr | 85 | transformers | ---
license: apache-2.0
tags:
datasets:
- drop
---
# NT5, a T5 model trained to perform numerical reasoning
T5-small model pre-trained on 3 million (partly synthetic) texts and fine-tuned on [DROP](https://allennlp.org/drop.html). It was introduced in the paper [NT5?! Training T5 to Perform Numerical Reasoning](https://arxiv.org/abs/2104.07307) by Yang et al. and first released in [this repository](https://github.com/lesterpjy/numeric-t5). As the original implementation was in TensorFlow 2, I've converted the weights to PyTorch. This model corresponds to RC Experiment 1 (see the paper), their best performing model.
Disclaimer: The team releasing NT5 did not write a model card for this model so this model card has been written by me.
## Model description
The NT5 model is a T5 model, in other words, an encoder-decoder Transformer. In order to encourage numerical reasoning, the model was further pre-trained on three datasets designed to strengthen skills necessary for numerical reasoning over text (NRoT) and general reading comprehension before being fine-tuned on the Discrete Reasoning over Text (DROP) dataset.
## Intended uses & limitations
You can use the model for numerical reasoning over text.
### How to use
Here is how to use this model:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
context = """Saint Jean de Brébeuf was a French Jesuit missionary who
travelled to New France in 1625. There he worked primarily with the Huron
for the rest of his life, except for a few years in France from 1629 to
1633. He learned their language and culture, writing extensively about
each to aid other missionaries. In 1649, Br´ebeuf and another missionary
were captured when an Iroquois raid took over a Huron village . Together
with Huron captives, the missionaries were ritually tortured and killed
on March 16, 1649. Br´ebeuf was beatified in 1925 and among eight Jesuit
missionaries canonized as saints in the Roman Catholic Church in 1930."""
question = "How many years did Saint Jean de Brébeuf stay in New France
before he went back to France for a few years?"
tokenizer = T5Tokenizer.from_pretrained("nielsr/nt5-small-rc1")
model = T5ForConditionalGeneration.from_pretrained("nielsr/nt5-small-rc1")
# encode context & question
input_text = f"answer_me: {question} context: {context}"
encoded_query = tokenizer(
input_text,
return_tensors='pt',
padding='max_length',
truncation=True,
max_length=512)
# generate answer
generated_answer = model.generate(input_ids=encoded_query["input_ids"],
attention_mask=encoded_query["attention_mask"],
max_length=54)
decoded_answer = tokenizer.decode(generated_answer.numpy()[0])
print("T5 Answer: ", decoded_answer)
# prints: T5 Answer:  4
```
## Evaluation results
This model achieves an F1 score of 0.7031 and exact match of 0.6687 on the development set of DROP.
### BibTeX entry and citation info
```bibtex
@misc{yang2021nt5,
title={NT5?! Training T5 to Perform Numerical Reasoning},
author={Peng-Jian Yang and Ying Ting Chen and Yuechan Chen and Daniel Cer},
year={2021},
eprint={2104.07307},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1903-00161,
author = {Dheeru Dua and
Yizhong Wang and
Pradeep Dasigi and
Gabriel Stanovsky and
Sameer Singh and
Matt Gardner},
title = {{DROP:} {A} Reading Comprehension Benchmark Requiring Discrete Reasoning
Over Paragraphs},
journal = {CoRR},
volume = {abs/1903.00161},
year = {2019},
url = {http://arxiv.org/abs/1903.00161},
archivePrefix = {arXiv},
eprint = {1903.00161},
timestamp = {Wed, 03 Jul 2019 07:17:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1903-00161.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
nielsr/tapas-base | 2020-12-11T11:12:17.000Z | [
"pytorch",
"tapas",
"en",
"arxiv:2004.02349",
"arxiv:2010.00571",
"transformers",
"sequence-classification",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| nielsr | 13 | transformers | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
---
# TAPAS base model
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `revision="v1"`, which corresponds to `tapas_inter_masklm_base`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then
jointly train these randomly initialized classification heads with the base model on a downstream task.
## Intended uses & limitations
You can use the raw model for getting hidden representations of table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you.
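For example, hidden representations for a table-question pair can be obtained as follows (a minimal sketch; the table content is made up and the TAPAS dependencies such as `torch-scatter` are assumed to be installed):
```python
import pandas as pd
from transformers import TapasTokenizer, TapasModel

tokenizer = TapasTokenizer.from_pretrained("nielsr/tapas-base")
model = TapasModel.from_pretrained("nielsr/tapas-base")

# Table cells must be strings
table = pd.DataFrame({"Actor": ["Brad Pitt", "Leonardo DiCaprio"], "Age": ["59", "48"]})
queries = ["How old is Brad Pitt?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # [1, sequence_length, 768]
```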
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Pre-training
The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details.
The optimizer used is Adam with a learning rate of 5e-5, and a warmup
ratio of 0.01.
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
|
nielsr/vit-base-patch16-224 | 2021-03-24T07:36:09.000Z | [
"pytorch",
"vit",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| nielsr | 66 | transformers | ||
nikhilnagaraj/german_gpt_small | 2021-05-23T10:49:10.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| nikhilnagaraj | 13 | transformers | |
nikk/gpt | 2021-04-14T21:50:36.000Z | []
| [
".gitattributes"
]
| nikk | 0 |