---
language: "pt"
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/logo-biobertpr1.png" alt="Logo BioBERTpt">
# BioBERTpt - Portuguese Clinical and Biomedical BERT
The [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) paper presents clinical and biomedical BERT-based models for the Portuguese language, initialized from BERT-Multilingual-Cased and trained on clinical notes and biomedical literature.

This model card describes the BioBERTpt(bio) model, the biomedical version of BioBERTpt, trained on Portuguese biomedical literature from scientific papers in PubMed and SciELO.
## How to use the model
Load the model via the transformers library:
```python
from transformers import AutoTokenizer, AutoModel

# Load the BioBERTpt(bio) tokenizer and encoder from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("pucpr/biobertpt-bio")
model = AutoModel.from_pretrained("pucpr/biobertpt-bio")
```
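Once loaded, the encoder can produce contextual embeddings for Portuguese clinical or biomedical text. The sketch below is illustrative and not part of the original card; the example sentence is hypothetical.

```python
import torch

# Hypothetical Portuguese biomedical sentence
sentence = "O paciente apresenta sintomas de pneumonia."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state holds one contextual embedding per token
embeddings = outputs.last_hidden_state
print(embeddings.shape)  # (1, sequence_length, hidden_size)
```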
## More Information
For additional details and performance on Portuguese NER tasks, refer to the original paper: [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/).
## Questions?
Post a GitHub issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).