Detsutut committed
Commit 8f2c408 · 1 Parent(s): 72e49c5

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -32,6 +32,7 @@ Here we host public weights for our biomedical language models. There are severa
 | [BioBIT](https://huggingface.co/bmi-labmedinfo/bioBIT) <sup>⸸</sup>| Biomedical | MaskedLM Pretrain | Trained after [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) with 28GB Pubmed abstracts (as in BioBERT) that have been translated from English into Italian using Google Neural Machine Translation (GNMT). |
 | [MedBIT](https://huggingface.co/bmi-labmedinfo/medBIT) <sup>⸸</sup>| Medical | MaskedLM Pretrain | Trained after [BioBIT](https://huggingface.co/bmi-labmedinfo/bioBIT) with additional 100MB of medical textbook data without any regularization. |
 | [MedBIT-R3+](https://huggingface.co/bmi-labmedinfo/medBIT-r3-plus) (recommended) <sup>⸸</sup>| Medical | MaskedLM Pretrain | Trained after [BioBIT](https://huggingface.co/bmi-labmedinfo/bioBIT) with additional 200MB of medical textbook data and web-crawled medical resources in Italian. Regularized with LLRD (.95), Mixout (.9), and Warmup (.02). |
+
 <sup>⸸</sup> <small>model developed in partnership with the [Neuroinformatics Lab](https://www.fatebenefratelli.it/it/ricerca_irccs-brescia#tab-15) of IRCCS Centro San Giovanni di Dio Fatebenefratelli, Brescia, Italy</small>
 
 
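
Since the checkpoints listed in the diff above are standard MaskedLM weights hosted on the Hugging Face Hub, the sketch below shows one way they might be queried with the `transformers` fill-mask pipeline. This is an illustrative assumption, not part of the commit: the repo id `bmi-labmedinfo/medBIT-r3-plus` is taken from the table link, the Italian example sentence is made up, and `transformers` (with a backend such as PyTorch) is assumed to be installed.

```python
# Illustrative sketch (not part of the commit): querying the recommended
# MedBIT-R3+ checkpoint as a masked language model via the transformers
# fill-mask pipeline. The repo id comes from the table above; the sentence
# is a made-up Italian medical example; [MASK] is the BERT mask token of
# the underlying Italian BERT tokenizer.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bmi-labmedinfo/medBIT-r3-plus")

predictions = fill_mask("Il paziente presenta una frattura del [MASK].")
for p in predictions:
    # Each prediction carries the proposed token and its probability score.
    print(f"{p['token_str']}\t{p['score']:.3f}")
```

The same pattern should apply to BioBIT and MedBIT by swapping in their repo ids from the table.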