Dataset columns: | **column** | **type** | **values / lengths** | |---------------------------------|-----------|-----------| | pipeline_tag | string (categorical) | 48 classes | | library_name | string (categorical) | 205 classes | | text | string | 0 to 18.3M characters | | metadata | string | 2 to 1.07B characters | | id | string | 5 to 122 characters | | last_modified | null | (all null) | | tags | list | 1 to 1.84k items | | sha | null | (all null) | | created_at | string | 25 characters |
token-classification
flair
## English NER in Flair (large model) This is the large 4-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **94.36** (corrected CoNLL-03) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/). --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english-large") # make example sentence sentence = Sentence("George Washington went to Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (1.0)] Span [5]: "Washington" [− Labels: LOC (1.0)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python import torch # 1. get the corpus from flair.datasets import CONLL_03 corpus = CONLL_03() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize fine-tuneable transformer embeddings WITH document context from flair.embeddings import TransformerWordEmbeddings embeddings = TransformerWordEmbeddings( model='xlm-roberta-large', layers="-1", subtoken_pooling="first", fine_tune=True, use_context=True, ) # 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection) from flair.models import SequenceTagger tagger = SequenceTagger( hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type='ner', use_crf=False, use_rnn=False, reproject_embeddings=False, ) # 6. initialize trainer with AdamW optimizer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW) # 7. run training with XLM parameters (20 epochs, small LR) from torch.optim.lr_scheduler import OneCycleLR trainer.train('resources/taggers/ner-english-large', learning_rate=5.0e-6, mini_batch_size=4, mini_batch_chunk_size=1, max_epochs=20, scheduler=OneCycleLR, embeddings_storage_mode='none', weight_decay=0., ) ``` --- ### Cite Please cite the following paper when using this model. ``` @misc{schweter2020flert, title={FLERT: Document-Level Features for Named Entity Recognition}, author={Stefan Schweter and Alan Akbik}, year={2020}, eprint={2011.06993}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
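Beyond the single-sentence demo above, `tagger.predict` also accepts a list of `Sentence` objects, which is the usual way to tag many sentences efficiently. A minimal sketch, assuming a recent Flair release where `predict` takes a `mini_batch_size` argument:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the large English NER tagger once
tagger = SequenceTagger.load("flair/ner-english-large")

# tag several sentences in a single call; mini_batch_size controls batching
sentences = [
    Sentence("George Washington went to Washington"),
    Sentence("Albert Einstein was born in Ulm"),
]
tagger.predict(sentences, mini_batch_size=32)

for sentence in sentences:
    for entity in sentence.get_spans('ner'):
        print(entity)
```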
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington went to Washington"}]}
flair/ner-english-large
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:conll2003", "arxiv:2011.06993", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## English NER in Flair (Ontonotes fast model) This is the fast version of the 18-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **89.3** (Ontonotes) Predicts 18 tags: | **tag** | **meaning** | |---------------------------------|-----------| | CARDINAL | cardinal value | | DATE | date value | | EVENT | event name | | FAC | building name | | GPE | geo-political entity | | LANGUAGE | language name | | LAW | law name | | LOC | location name | | MONEY | money name | | NORP | affiliation | | ORDINAL | ordinal value | | ORG | organization name | | PERCENT | percent value | | PERSON | person name | | PRODUCT | product name | | QUANTITY | quantity value | | TIME | time value | | WORK_OF_ART | name of work of art | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english-ontonotes-fast") # make example sentence sentence = Sentence("On September 1st George Washington won 1 dollar.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [2,3]: "September 1st" [− Labels: DATE (0.9655)] Span [4,5]: "George Washington" [− Labels: PERSON (0.8243)] Span [7,8]: "1 dollar" [− Labels: MONEY (0.8022)] ``` So, the entities "*September 1st*" (labeled as a **date**), "*George Washington*" (labeled as a **person**) and "*1 dollar*" (labeled as a **money**) are found in the sentence "*On September 1st George Washington won 1 dollar*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('en-crawl'), # contextual string embeddings, forward FlairEmbeddings('news-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('news-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-english-ontonotes-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. 
``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
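Since the fast model reports a confidence with each span, it can be useful to filter predictions by score. A small sketch, assuming the `Span` objects expose `.text`, `.tag` and `.score` as in the Flair releases this card targets (newer releases expose the same values via `entity.get_label('ner')`):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-english-ontonotes-fast")
sentence = Sentence("On September 1st George Washington won 1 dollar.")
tagger.predict(sentence)

# keep only entities predicted with high confidence
THRESHOLD = 0.9
for entity in sentence.get_spans('ner'):
    if entity.score >= THRESHOLD:
        print(entity.text, entity.tag, round(entity.score, 4))
```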
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "widget": [{"text": "On September 1st George Washington won 1 dollar."}]}
flair/ner-english-ontonotes-fast
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:ontonotes", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## English NER in Flair (Ontonotes large model) This is the large 18-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **90.93** (Ontonotes) Predicts 18 tags: | **tag** | **meaning** | |---------------------------------|-----------| | CARDINAL | cardinal value | | DATE | date value | | EVENT | event name | | FAC | building name | | GPE | geo-political entity | | LANGUAGE | language name | | LAW | law name | | LOC | location name | | MONEY | money name | | NORP | affiliation | | ORDINAL | ordinal value | | ORG | organization name | | PERCENT | percent value | | PERSON | person name | | PRODUCT | product name | | QUANTITY | quantity value | | TIME | time value | | WORK_OF_ART | name of work of art | Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/). --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english-ontonotes-large") # make example sentence sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [2,3]: "September 1st" [− Labels: DATE (1.0)] Span [4]: "George" [− Labels: PERSON (1.0)] Span [6,7]: "1 dollar" [− Labels: MONEY (1.0)] Span [10,11,12]: "Game of Thrones" [− Labels: WORK_OF_ART (1.0)] ``` So, the entities "*September 1st*" (labeled as a **date**), "*George*" (labeled as a **person**), "*1 dollar*" (labeled as a **money**) and "Game of Thrones" (labeled as a **work of art**) are found in the sentence "*On September 1st George Washington won 1 dollar while watching Game of Thrones*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize fine-tuneable transformer embeddings WITH document context from flair.embeddings import TransformerWordEmbeddings embeddings = TransformerWordEmbeddings( model='xlm-roberta-large', layers="-1", subtoken_pooling="first", fine_tune=True, use_context=True, ) # 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection) from flair.models import SequenceTagger tagger = SequenceTagger( hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type='ner', use_crf=False, use_rnn=False, reproject_embeddings=False, ) # 6. initialize trainer with AdamW optimizer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW) # 7. 
run training with XLM parameters (20 epochs, small LR) from torch.optim.lr_scheduler import OneCycleLR trainer.train('resources/taggers/ner-english-ontonotes-large', learning_rate=5.0e-6, mini_batch_size=4, mini_batch_chunk_size=1, max_epochs=20, scheduler=OneCycleLR, embeddings_storage_mode='none', weight_decay=0., ) ``` --- ### Cite Please cite the following paper when using this model. ``` @misc{schweter2020flert, title={FLERT: Document-Level Features for Named Entity Recognition}, author={Stefan Schweter and Alan Akbik}, year={2020}, eprint={2011.06993}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
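With 18 entity types it is often handy to group the predicted spans by tag. A minimal sketch, again assuming `Span.text` and `Span.tag` as in the Flair releases this card targets:

```python
from collections import defaultdict

from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")
sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")
tagger.predict(sentence)

# collect entity texts per predicted tag
by_type = defaultdict(list)
for entity in sentence.get_spans('ner'):
    by_type[entity.tag].append(entity.text)

print(dict(by_type))
# expected along the lines of: {'DATE': ['September 1st'], 'PERSON': ['George'], ...}
```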
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "widget": [{"text": "On September 1st George won 1 dollar while watching Game of Thrones."}]}
flair/ner-english-ontonotes-large
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:ontonotes", "arxiv:2011.06993", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## English NER in Flair (Ontonotes default model) This is the 18-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **89.27** (Ontonotes) Predicts 18 tags: | **tag** | **meaning** | |---------------------------------|-----------| | CARDINAL | cardinal value | | DATE | date value | | EVENT | event name | | FAC | building name | | GPE | geo-political entity | | LANGUAGE | language name | | LAW | law name | | LOC | location name | | MONEY | money name | | NORP | affiliation | | ORDINAL | ordinal value | | ORG | organization name | | PERCENT | percent value | | PERSON | person name | | PRODUCT | product name | | QUANTITY | quantity value | | TIME | time value | | WORK_OF_ART | name of work of art | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english-ontonotes") # make example sentence sentence = Sentence("On September 1st George Washington won 1 dollar.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [2,3]: "September 1st" [− Labels: DATE (0.8824)] Span [4,5]: "George Washington" [− Labels: PERSON (0.9604)] Span [7,8]: "1 dollar" [− Labels: MONEY (0.9837)] ``` So, the entities "*September 1st*" (labeled as a **date**), "*George Washington*" (labeled as a **person**) and "*1 dollar*" (labeled as a **money**) are found in the sentence "*On September 1st George Washington won 1 dollar*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('en-crawl'), # contextual string embeddings, forward FlairEmbeddings('news-forward'), # contextual string embeddings, backward FlairEmbeddings('news-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-english-ontonotes', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. 
``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
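For a quick overview of which entity types dominate a small batch of texts, the predicted tags can simply be counted. A sketch under the same `Span.tag` assumption as in the demo above:

```python
from collections import Counter

from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-english-ontonotes")

texts = [
    "On September 1st George Washington won 1 dollar.",
    "The United Nations met in New York in 2021.",
]
sentences = [Sentence(t) for t in texts]
tagger.predict(sentences)

# count how often each entity type was predicted
counts = Counter(entity.tag for s in sentences for entity in s.get_spans('ner'))
print(counts)
```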
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "widget": [{"text": "On September 1st George Washington won 1 dollar."}]}
flair/ner-english-ontonotes
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:ontonotes", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## English NER in Flair (default model) This is the standard 4-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **93,06** (corrected CoNLL-03) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english") # make example sentence sentence = Sentence("George Washington went to Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.9968)] Span [5]: "Washington" [− Labels: LOC (0.9994)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_03 from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = CONLL_03() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('glove'), # contextual string embeddings, forward FlairEmbeddings('news-forward'), # contextual string embeddings, backward FlairEmbeddings('news-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-english', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
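For a quick look at the predictions without iterating over spans, the tagged sentence can also be printed in one line. A minimal sketch, assuming `Sentence.to_tagged_string()` as in the Flair releases this card targets (the exact BIOES markup in the output may vary by version):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-english")
sentence = Sentence("George Washington went to Washington")
tagger.predict(sentence)

# compact, human-readable view of the tagged sentence
print(sentence.to_tagged_string())
```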
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington went to Washington"}]}
flair/ner-english
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:conll2003", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## French NER in Flair (default model) This is the standard 4-class NER model for French that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **90,61** (WikiNER) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-french") # make example sentence sentence = Sentence("George Washington est allé à Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.7394)] Span [6]: "Washington" [− Labels: LOC (0.9161)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington est allé à Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import WIKINER_FRENCH from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = WIKINER_FRENCH() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('fr'), # contextual string embeddings, forward FlairEmbeddings('fr-forward'), # contextual string embeddings, backward FlairEmbeddings('fr-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-french', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
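If the predictions are to be stored or passed to another service, the spans can be converted into plain dictionaries. A small sketch, assuming `Span.text`, `Span.tag` and `Span.score` as in the Flair releases this card targets:

```python
import json

from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-french")
sentence = Sentence("George Washington est allé à Washington")
tagger.predict(sentence)

# serialize the predicted entities, e.g. for a JSON API response
entities = [
    {"text": entity.text, "tag": entity.tag, "score": round(entity.score, 4)}
    for entity in sentence.get_spans('ner')
]
print(json.dumps(entities, ensure_ascii=False, indent=2))
```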
{"language": "fr", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington est all\u00e9 \u00e0 Washington"}]}
flair/ner-french
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "dataset:conll2003", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## German NER in Flair (large model) This is the large 4-class NER model for German that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **92.31** (CoNLL-03 German revised) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf). --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-german-large") # make example sentence sentence = Sentence("George Washington ging nach Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (1.0)] Span [5]: "Washington" [− Labels: LOC (1.0)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python import torch # 1. get the corpus from flair.datasets import CONLL_03_GERMAN corpus = CONLL_03_GERMAN() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize fine-tuneable transformer embeddings WITH document context from flair.embeddings import TransformerWordEmbeddings embeddings = TransformerWordEmbeddings( model='xlm-roberta-large', layers="-1", subtoken_pooling="first", fine_tune=True, use_context=True, ) # 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection) from flair.models import SequenceTagger tagger = SequenceTagger( hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type='ner', use_crf=False, use_rnn=False, reproject_embeddings=False, ) # 6. initialize trainer with AdamW optimizer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW) # 7. run training with XLM parameters (20 epochs, small LR) from torch.optim.lr_scheduler import OneCycleLR trainer.train('resources/taggers/ner-german-large', learning_rate=5.0e-6, mini_batch_size=4, mini_batch_chunk_size=1, max_epochs=20, scheduler=OneCycleLR, embeddings_storage_mode='none', weight_decay=0., ) ``` --- ### Cite Please cite the following paper when using this model. ``` @misc{schweter2020flert, title={FLERT: Document-Level Features for Named Entity Recognition}, author={Stefan Schweter and Alan Akbik}, year={2020}, eprint={2011.06993}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
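Because the XLM-R based model is large, it can help to choose the compute device explicitly before loading it. A sketch assuming the module-level `flair.device` attribute documented for recent Flair releases:

```python
import torch
import flair
from flair.data import Sentence
from flair.models import SequenceTagger

# pick GPU if available, otherwise fall back to CPU
flair.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

tagger = SequenceTagger.load("flair/ner-german-large")
sentence = Sentence("George Washington ging nach Washington")
tagger.predict(sentence)

for entity in sentence.get_spans('ner'):
    print(entity)
```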
{"language": "de", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington ging nach Washington"}]}
flair/ner-german-large
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "dataset:conll2003", "arxiv:2011.06993", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## NER for German Legal Text in Flair (default model) This is the legal NER model for German that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **96,35** (LER German dataset) Predicts 19 tags: | **tag** | **meaning** | |---------------------------------|-----------| | AN | Anwalt | | EUN | Europäische Norm | | GS | Gesetz | | GRT | Gericht | | INN | Institution | | LD | Land | | LDS | Landschaft | | LIT | Literatur | | MRK | Marke | | ORG | Organisation | | PER | Person | | RR | Richter | | RS | Rechtssprechung | | ST | Stadt | | STR | Straße | | UN | Unternehmen | | VO | Verordnung | | VS | Vorschrift | | VT | Vertrag | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. More details on the Legal NER dataset [here](https://github.com/elenanereiss/Legal-Entity-Recognition) --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-german-legal") # make example sentence (don't use tokenizer since Rechtstexte are badly handled) sentence = Sentence("Herr W. verstieß gegen § 36 Abs. 7 IfSG.", use_tokenizer=False) # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [2]: "W." [− Labels: PER (0.9911)] Span [5,6,7,8,9]: "§ 36 Abs. 7 IfSG." [− Labels: GS (0.5353)] ``` So, the entities "*W.*" (labeled as a **person**) and "*§ 36 Abs. 7 IfSG*" (labeled as a **Gesetz**) are found in the sentence "*Herr W. verstieß gegen § 36 Abs. 7 IfSG.*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import LER_GERMAN from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = LER_GERMAN() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('de'), # contextual string embeddings, forward FlairEmbeddings('de-forward'), # contextual string embeddings, backward FlairEmbeddings('de-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-german-legal', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following papers when using this model. ``` @inproceedings{leitner2019fine, author = {Elena Leitner and Georg Rehm and Julian Moreno-Schneider}, title = {{Fine-grained Named Entity Recognition in Legal Documents}}, booktitle = {Semantic Systems. The Power of AI and Knowledge Graphs. 
Proceedings of the 15th International Conference (SEMANTiCS 2019)}, year = 2019, pages = {272--287}, pdf = {https://link.springer.com/content/pdf/10.1007%2F978-3-030-33220-4_20.pdf}} ``` ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
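In legal text one is often only interested in statute-like references (GS, VO, VS). A small sketch that keeps just those spans, assuming `Span.text` and `Span.tag` as in the Flair releases this card targets:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-german-legal")

# as in the demo above, disable the tokenizer for legal text
sentence = Sentence("Herr W. verstieß gegen § 36 Abs. 7 IfSG.", use_tokenizer=False)
tagger.predict(sentence)

# keep only Gesetz / Verordnung / Vorschrift mentions
wanted = {"GS", "VO", "VS"}
statutes = [entity.text for entity in sentence.get_spans('ner') if entity.tag in wanted]
print(statutes)
```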
{"language": "de", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["legal"], "widget": [{"text": "Herr W. verstie\u00df gegen \u00a7 36 Abs. 7 IfSG."}]}
flair/ner-german-legal
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "dataset:legal", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## German NER in Flair (default model) This is the standard 4-class NER model for German that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **87,94** (CoNLL-03 German revised) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-german") # make example sentence sentence = Sentence("George Washington ging nach Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.9977)] Span [5]: "Washington" [− Labels: LOC (0.9895)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_03_GERMAN from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = CONLL_03_GERMAN() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('de'), # contextual string embeddings, forward FlairEmbeddings('de-forward'), # contextual string embeddings, backward FlairEmbeddings('de-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-german', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
{"language": "de", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington ging nach Washington"}]}
flair/ner-german
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "dataset:conll2003", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## 4-Language NER in Flair (English, German, Dutch and Spanish) This is the fast 4-class NER model for 4 CoNLL-03 languages that ships with [Flair](https://github.com/flairNLP/flair/). Also kind of works for related languages like French. F1-Score: **91,51** (CoNLL-03 English), **85,72** (CoNLL-03 German revised), **86,22** (CoNLL-03 Dutch), **85,78** (CoNLL-03 Spanish) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-multi-fast") # make example sentence in any of the four languages sentence = Sentence("George Washington ging nach Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.9977)] Span [5]: "Washington" [− Labels: LOC (0.9895)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_03, CONLL_03_GERMAN, CONLL_03_DUTCH, CONLL_03_SPANISH from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the multi-language corpus corpus: Corpus = MultiCorpus([ CONLL_03(), # English corpus CONLL_03_GERMAN(), # German corpus CONLL_03_DUTCH(), # Dutch corpus CONLL_03_SPANISH(), # Spanish corpus ]) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('glove'), # FastText embeddings WordEmbeddings('de'), # contextual string embeddings, forward FlairEmbeddings('multi-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('multi-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-multi-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following papers when using this model. 
``` @misc{akbik2019multilingual, title={Multilingual sequence labeling with one model}, author={Akbik, Alan and Bergmann, Tanja and Vollgraf, Roland}, booktitle = {{NLDL} 2019, Northern Lights Deep Learning Workshop}, year = {2019} } ``` ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ```
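Since one model covers all four languages, mixed-language input can be tagged in a single batch. A minimal sketch (the Dutch example sentence is an assumed translation, added for illustration):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# one tagger for English, German, Dutch and Spanish
tagger = SequenceTagger.load("flair/ner-multi-fast")

texts = [
    "George Washington went to Washington",    # English
    "George Washington ging nach Washington",  # German
    "George Washington ging naar Washington",  # Dutch
    "George Washington fue a Washington",      # Spanish
]
sentences = [Sentence(t) for t in texts]
tagger.predict(sentences)

for sentence in sentences:
    print([(entity.text, entity.tag) for entity in sentence.get_spans('ner')])
```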
{"language": ["en", "de", "nl", "es"], "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington ging nach Washington"}]}
flair/ner-multi-fast
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "de", "nl", "es", "dataset:conll2003", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## 4-Language NER in Flair (English, German, Dutch and Spanish) This is the standard 4-class NER model for 4 CoNLL-03 languages that ships with [Flair](https://github.com/flairNLP/flair/). Also kind of works for related languages like French. F1-Score: **92,16** (CoNLL-03 English), **87,33** (CoNLL-03 German revised), **88,96** (CoNLL-03 Dutch), **86,65** (CoNLL-03 Spanish) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-multi") # make example sentence in any of the four languages sentence = Sentence("George Washington ging nach Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.9977)] Span [5]: "Washington" [− Labels: LOC (0.9895)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_03, CONLL_03_GERMAN, CONLL_03_DUTCH, CONLL_03_SPANISH from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the multi-language corpus corpus: Corpus = MultiCorpus([ CONLL_03(), # English corpus CONLL_03_GERMAN(), # German corpus CONLL_03_DUTCH(), # Dutch corpus CONLL_03_SPANISH(), # Spanish corpus ]) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('glove'), # FastText embeddings WordEmbeddings('de'), # contextual string embeddings, forward FlairEmbeddings('multi-forward'), # contextual string embeddings, backward FlairEmbeddings('multi-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-multi', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. 
``` @misc{akbik2019multilingual, title={Multilingual sequence labeling with one model}, author={Akbik, Alan and Bergmann, Tanja and Vollgraf, Roland}, booktitle = {{NLDL} 2019, Northern Lights Deep Learning Workshop}, year = {2019} } ``` ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ```
{"language": ["en", "de", "nl", "es", "multilingual"], "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington ging nach Washington"}]}
flair/ner-multi
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "de", "nl", "es", "multilingual", "dataset:conll2003", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## Spanish NER in Flair (large model) This is the large 4-class NER model for Spanish that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **90.54** (CoNLL-03 Spanish) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/). --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-spanish-large") # make example sentence sentence = Sentence("George Washington fue a Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (1.0)] Span [5]: "Washington" [− Labels: LOC (1.0)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington fue a Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python import torch # 1. get the corpus from flair.datasets import CONLL_03_SPANISH corpus = CONLL_03_SPANISH() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize fine-tuneable transformer embeddings WITH document context from flair.embeddings import TransformerWordEmbeddings embeddings = TransformerWordEmbeddings( model='xlm-roberta-large', layers="-1", subtoken_pooling="first", fine_tune=True, use_context=True, ) # 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection) from flair.models import SequenceTagger tagger = SequenceTagger( hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type='ner', use_crf=False, use_rnn=False, reproject_embeddings=False, ) # 6. initialize trainer with AdamW optimizer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW) # 7. run training with XLM parameters (20 epochs, small LR) from torch.optim.lr_scheduler import OneCycleLR trainer.train('resources/taggers/ner-spanish-large', learning_rate=5.0e-6, mini_batch_size=4, mini_batch_chunk_size=1, max_epochs=20, scheduler=OneCycleLR, embeddings_storage_mode='none', weight_decay=0., ) ``` --- ### Cite Please cite the following paper when using this model. ``` @misc{schweter2020flert, title={FLERT: Document-Level Features for Named Entity Recognition}, author={Stefan Schweter and Alan Akbik}, year={2020}, eprint={2011.06993}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
{"language": "es", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington fue a Washington"}]}
flair/ner-spanish-large
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "es", "dataset:conll2003", "arxiv:2011.06993", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## English Part-of-Speech Tagging in Flair (fast model) This is the fast part-of-speech tagging model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **98,10** (Ontonotes) Predicts fine-grained POS tags: | **tag** | **meaning** | |---------------------------------|-----------| |ADD | Email | |AFX | Affix | |CC | Coordinating conjunction | |CD | Cardinal number | |DT | Determiner | |EX | Existential there | |FW | Foreign word | |HYPH | Hyphen | |IN | Preposition or subordinating conjunction | |JJ | Adjective | |JJR |Adjective, comparative | |JJS | Adjective, superlative | |LS | List item marker | |MD | Modal | |NFP | Superfluous punctuation | |NN | Noun, singular or mass | |NNP |Proper noun, singular | |NNPS | Proper noun, plural | |NNS |Noun, plural | |PDT | Predeterminer | |POS | Possessive ending | |PRP | Personal pronoun | |PRP$ | Possessive pronoun | |RB | Adverb | |RBR | Adverb, comparative | |RBS | Adverb, superlative | |RP | Particle | |SYM | Symbol | |TO | to | |UH | Interjection | |VB | Verb, base form | |VBD | Verb, past tense | |VBG | Verb, gerund or present participle | |VBN | Verb, past participle | |VBP | Verb, non-3rd person singular present | |VBZ | Verb, 3rd person singular present | |WDT | Wh-determiner | |WP | Wh-pronoun | |WP$ | Possessive wh-pronoun | |WRB | Wh-adverb | |XX | Unknown | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/pos-english-fast") # make example sentence sentence = Sentence("I love Berlin.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('pos'): print(entity) ``` This yields the following output: ``` Span [1]: "I" [− Labels: PRP (1.0)] Span [2]: "love" [− Labels: VBP (0.9998)] Span [3]: "Berlin" [− Labels: NNP (0.9999)] Span [4]: "." [− Labels: . (0.9998)] ``` So, the word "*I*" is labeled as a **pronoun** (PRP), "*love*" is labeled as a **verb** (VBP) and "*Berlin*" is labeled as a **proper noun** (NNP) in the sentence "*I love Berlin*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'pos' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # contextual string embeddings, forward FlairEmbeddings('news-forward'), # contextual string embeddings, backward FlairEmbeddings('news-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. 
initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/pos-english-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
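The POS model labels every token, so the spans returned by `get_spans('pos')` are single words. A small sketch that turns them into (word, tag) pairs, assuming `Span.text` and `Span.tag` as in the Flair releases this card targets:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/pos-english-fast")
sentence = Sentence("I love Berlin.")
tagger.predict(sentence)

# build a simple list of (word, POS tag) pairs
pairs = [(span.text, span.tag) for span in sentence.get_spans('pos')]
print(pairs)
# expected along the lines of: [('I', 'PRP'), ('love', 'VBP'), ('Berlin', 'NNP'), ('.', '.')]
```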
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "widget": [{"text": "I love Berlin."}]}
flair/pos-english-fast
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:ontonotes", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## English Part-of-Speech Tagging in Flair (default model) This is the standard part-of-speech tagging model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **98,19** (Ontonotes) Predicts fine-grained POS tags: | **tag** | **meaning** | |---------------------------------|-----------| |ADD | Email | |AFX | Affix | |CC | Coordinating conjunction | |CD | Cardinal number | |DT | Determiner | |EX | Existential there | |FW | Foreign word | |HYPH | Hyphen | |IN | Preposition or subordinating conjunction | |JJ | Adjective | |JJR |Adjective, comparative | |JJS | Adjective, superlative | |LS | List item marker | |MD | Modal | |NFP | Superfluous punctuation | |NN | Noun, singular or mass | |NNP |Proper noun, singular | |NNPS | Proper noun, plural | |NNS |Noun, plural | |PDT | Predeterminer | |POS | Possessive ending | |PRP | Personal pronoun | |PRP$ | Possessive pronoun | |RB | Adverb | |RBR | Adverb, comparative | |RBS | Adverb, superlative | |RP | Particle | |SYM | Symbol | |TO | to | |UH | Interjection | |VB | Verb, base form | |VBD | Verb, past tense | |VBG | Verb, gerund or present participle | |VBN | Verb, past participle | |VBP | Verb, non-3rd person singular present | |VBZ | Verb, 3rd person singular present | |WDT | Wh-determiner | |WP | Wh-pronoun | |WP$ | Possessive wh-pronoun | |WRB | Wh-adverb | |XX | Unknown | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/pos-english") # make example sentence sentence = Sentence("I love Berlin.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('pos'): print(entity) ``` This yields the following output: ``` Span [1]: "I" [− Labels: PRP (1.0)] Span [2]: "love" [− Labels: VBP (1.0)] Span [3]: "Berlin" [− Labels: NNP (0.9999)] Span [4]: "." [− Labels: . (1.0)] ``` So, the word "*I*" is labeled as a **pronoun** (PRP), "*love*" is labeled as a **verb** (VBP) and "*Berlin*" is labeled as a **proper noun** (NNP) in the sentence "*I love Berlin*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'pos' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # contextual string embeddings, forward FlairEmbeddings('news-forward'), # contextual string embeddings, backward FlairEmbeddings('news-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. 
initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/pos-english', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "inference": false}
flair/pos-english
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:ontonotes", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## English Universal Part-of-Speech Tagging in Flair (fast model) This is the fast universal part-of-speech tagging model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **98,47** (Ontonotes) Predicts universal POS tags: | **tag** | **meaning** | |---------------------------------|-----------| |ADJ | adjective | | ADP | adposition | | ADV | adverb | | AUX | auxiliary | | CCONJ | coordinating conjunction | | DET | determiner | | INTJ | interjection | | NOUN | noun | | NUM | numeral | | PART | particle | | PRON | pronoun | | PROPN | proper noun | | PUNCT | punctuation | | SCONJ | subordinating conjunction | | SYM | symbol | | VERB | verb | | X | other | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/upos-english-fast") # make example sentence sentence = Sentence("I love Berlin.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('pos'): print(entity) ``` This yields the following output: ``` Span [1]: "I" [− Labels: PRON (0.9996)] Span [2]: "love" [− Labels: VERB (1.0)] Span [3]: "Berlin" [− Labels: PROPN (0.9986)] Span [4]: "." [− Labels: PUNCT (1.0)] ``` So, the word "*I*" is labeled as a **pronoun** (PRON), "*love*" is labeled as a **verb** (VERB) and "*Berlin*" is labeled as a **proper noun** (PROPN) in the sentence "*I love Berlin*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'upos' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # contextual string embeddings, forward FlairEmbeddings('news-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('news-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/upos-english-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? 
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "widget": [{"text": "I love Berlin."}]}
flair/upos-english-fast
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:ontonotes", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## English Universal Part-of-Speech Tagging in Flair (default model) This is the standard universal part-of-speech tagging model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **98,6** (Ontonotes) Predicts universal POS tags: | **tag** | **meaning** | |---------------------------------|-----------| |ADJ | adjective | | ADP | adposition | | ADV | adverb | | AUX | auxiliary | | CCONJ | coordinating conjunction | | DET | determiner | | INTJ | interjection | | NOUN | noun | | NUM | numeral | | PART | particle | | PRON | pronoun | | PROPN | proper noun | | PUNCT | punctuation | | SCONJ | subordinating conjunction | | SYM | symbol | | VERB | verb | | X | other | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/upos-english") # make example sentence sentence = Sentence("I love Berlin.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('pos'): print(entity) ``` This yields the following output: ``` Span [1]: "I" [− Labels: PRON (0.9996)] Span [2]: "love" [− Labels: VERB (1.0)] Span [3]: "Berlin" [− Labels: PROPN (0.9986)] Span [4]: "." [− Labels: PUNCT (1.0)] ``` So, the word "*I*" is labeled as a **pronoun** (PRON), "*love*" is labeled as a **verb** (VERB) and "*Berlin*" is labeled as a **proper noun** (PROPN) in the sentence "*I love Berlin*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'upos' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # contextual string embeddings, forward FlairEmbeddings('news-forward'), # contextual string embeddings, backward FlairEmbeddings('news-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/upos-english', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? 
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "widget": [{"text": "I love Berlin."}]}
flair/upos-english
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:ontonotes", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## Multilingual Universal Part-of-Speech Tagging in Flair (fast model) This is the fast multilingual universal part-of-speech tagging model that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **92,88** (12 UD Treebanks covering English, German, French, Italian, Dutch, Polish, Spanish, Swedish, Danish, Norwegian, Finnish and Czech) Predicts universal POS tags: | **tag** | **meaning** | |---------------------------------|-----------| |ADJ | adjective | | ADP | adposition | | ADV | adverb | | AUX | auxiliary | | CCONJ | coordinating conjunction | | DET | determiner | | INTJ | interjection | | NOUN | noun | | NUM | numeral | | PART | particle | | PRON | pronoun | | PROPN | proper noun | | PUNCT | punctuation | | SCONJ | subordinating conjunction | | SYM | symbol | | VERB | verb | | X | other | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/upos-multi-fast") # make example sentence sentence = Sentence("Ich liebe Berlin, as they say. ") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('pos'): print(entity) ``` This yields the following output: ``` Span [1]: "Ich" [− Labels: PRON (0.9999)] Span [2]: "liebe" [− Labels: VERB (0.9999)] Span [3]: "Berlin" [− Labels: PROPN (0.9997)] Span [4]: "," [− Labels: PUNCT (1.0)] Span [5]: "as" [− Labels: SCONJ (0.9991)] Span [6]: "they" [− Labels: PRON (0.9998)] Span [7]: "say" [− Labels: VERB (0.9998)] Span [8]: "." [− Labels: PUNCT (1.0)] ``` So, the words "*Ich*" and "*they*" are labeled as **pronouns** (PRON), while "*liebe*" and "*say*" are labeled as **verbs** (VERB) in the multilingual sentence "*Ich liebe Berlin, as they say*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import MultiCorpus from flair.datasets import UD_ENGLISH, UD_GERMAN, UD_FRENCH, UD_ITALIAN, UD_POLISH, UD_DUTCH, UD_CZECH, \ UD_DANISH, UD_SPANISH, UD_SWEDISH, UD_NORWEGIAN, UD_FINNISH from flair.embeddings import StackedEmbeddings, FlairEmbeddings # 1. make a multi corpus consisting of 12 UD treebanks (in_memory=False here because this corpus becomes large) corpus = MultiCorpus([ UD_ENGLISH(in_memory=False), UD_GERMAN(in_memory=False), UD_DUTCH(in_memory=False), UD_FRENCH(in_memory=False), UD_ITALIAN(in_memory=False), UD_SPANISH(in_memory=False), UD_POLISH(in_memory=False), UD_CZECH(in_memory=False), UD_DANISH(in_memory=False), UD_SWEDISH(in_memory=False), UD_NORWEGIAN(in_memory=False), UD_FINNISH(in_memory=False), ]) # 2. what tag do we want to predict? tag_type = 'upos' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # contextual string embeddings, forward FlairEmbeddings('multi-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('multi-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. 
initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type, use_crf=False) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/upos-multi-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
{"language": ["en", "de", "fr", "it", "nl", "pl", "es", "sv", "da", false, "fi", "cs"], "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "widget": [{"text": "Ich liebe Berlin, as they say."}]}
flair/upos-multi-fast
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "de", "fr", "it", "nl", "pl", "es", "sv", "da", "no", "fi", "cs", "dataset:ontonotes", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## Multilingual Universal Part-of-Speech Tagging in Flair (default model) This is the default multilingual universal part-of-speech tagging model that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **98,47** (12 UD Treebanks covering English, German, French, Italian, Dutch, Polish, Spanish, Swedish, Danish, Norwegian, Finnish and Czech) Predicts universal POS tags: | **tag** | **meaning** | |---------------------------------|-----------| |ADJ | adjective | | ADP | adposition | | ADV | adverb | | AUX | auxiliary | | CCONJ | coordinating conjunction | | DET | determiner | | INTJ | interjection | | NOUN | noun | | NUM | numeral | | PART | particle | | PRON | pronoun | | PROPN | proper noun | | PUNCT | punctuation | | SCONJ | subordinating conjunction | | SYM | symbol | | VERB | verb | | X | other | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/upos-multi") # make example sentence sentence = Sentence("Ich liebe Berlin, as they say. ") # predict POS tags tagger.predict(sentence) # print sentence print(sentence) # iterate over tokens and print the predicted POS label print("The following POS tags are found:") for token in sentence: print(token.get_label("upos")) ``` This yields the following output: ``` Token[0]: "Ich" → PRON (0.9999) Token[1]: "liebe" → VERB (0.9999) Token[2]: "Berlin" → PROPN (0.9997) Token[3]: "," → PUNCT (1.0) Token[4]: "as" → SCONJ (0.9991) Token[5]: "they" → PRON (0.9998) Token[6]: "say" → VERB (0.9998) Token[7]: "." → PUNCT (1.0) ``` So, the words "*Ich*" and "*they*" are labeled as **pronouns** (PRON), while "*liebe*" and "*say*" are labeled as **verbs** (VERB) in the multilingual sentence "*Ich liebe Berlin, as they say*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import MultiCorpus from flair.datasets import UD_ENGLISH, UD_GERMAN, UD_FRENCH, UD_ITALIAN, UD_POLISH, UD_DUTCH, UD_CZECH, \ UD_DANISH, UD_SPANISH, UD_SWEDISH, UD_NORWEGIAN, UD_FINNISH from flair.embeddings import StackedEmbeddings, FlairEmbeddings # 1. make a multi corpus consisting of 12 UD treebanks (in_memory=False here because this corpus becomes large) corpus = MultiCorpus([ UD_ENGLISH(in_memory=False), UD_GERMAN(in_memory=False), UD_DUTCH(in_memory=False), UD_FRENCH(in_memory=False), UD_ITALIAN(in_memory=False), UD_SPANISH(in_memory=False), UD_POLISH(in_memory=False), UD_CZECH(in_memory=False), UD_DANISH(in_memory=False), UD_SWEDISH(in_memory=False), UD_NORWEGIAN(in_memory=False), UD_FINNISH(in_memory=False), ]) # 2. what tag do we want to predict? tag_type = 'upos' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # contextual string embeddings, forward FlairEmbeddings('multi-forward'), # contextual string embeddings, backward FlairEmbeddings('multi-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type, use_crf=False) # 6. 
initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/upos-multi', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
{"language": ["en", "de", "fr", "it", "nl", "pl", "es", "sv", "da", false, "fi", "cs"], "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "widget": [{"text": "Ich liebe Berlin, as they say"}]}
flair/upos-multi
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "de", "fr", "it", "nl", "pl", "es", "sv", "da", "no", "fi", "cs", "dataset:ontonotes", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## Test model README Some test README description
{"tags": ["flair", "token-classification"], "widget": [{"text": "does this work"}]}
flairbook/flairmodel
null
[ "flair", "pytorch", "token-classification", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
## Test model README Some test README description
{"tags": ["flair", "token-classification"], "widget": [{"text": "does this work"}]}
flairbook2/flairmodel
null
[ "flair", "pytorch", "token-classification", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Marty DialoGPT Model
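A minimal usage sketch, assuming the standard DialoGPT chat recipe from 🤗 Transformers; the model id comes from this repository, while the prompt and generation settings are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flakje/DialoGPT-small-Marty")
model = AutoModelForCausalLM.from_pretrained("flakje/DialoGPT-small-Marty")

# DialoGPT expects each user turn to end with the end-of-sequence token
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")

# Greedy decoding keeps the example deterministic; tune max_length as needed
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (the bot's reply)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```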
{"tags": ["conversational"]}
flakje/DialoGPT-small-Marty
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flank/aquabotmediumnew
null
[ "tensorboard", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flanqi/distilbert-base-uncased-finetuned-emotion
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flareau/distilgpt2-finetuned-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# FlauBERT: Unsupervised Language Model Pre-training for French **FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/ ) supercomputer. Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language.For more details please refer to the [official website](https://github.com/getalp/Flaubert). ## FlauBERT models | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `flaubert-small-cased` | 6 | 8 | 512 | 54 M | | `flaubert-base-uncased` | 12 | 12 | 768 | 137 M | | `flaubert-base-cased` | 12 | 12 | 768 | 138 M | | `flaubert-large-cased` | 24 | 16 | 1024 | 373 M | **Note:** `flaubert-small-cased` is partially trained so performance is not guaranteed. Consider using it for debugging purpose only. ## Using FlauBERT with Hugging Face's Transformers ```python import torch from transformers import FlaubertModel, FlaubertTokenizer # Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased', # 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased'] modelname = 'flaubert/flaubert_base_cased' # Load pretrained model and tokenizer flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True) flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False) # do_lowercase=False if using cased models, True if using uncased ones sentence = "Le chat mange une pomme." 
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)]) last_layer = flaubert(token_ids)[0] print(last_layer.shape) # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension) # The BERT [CLS] token correspond to the first hidden state of the last layer cls_embedding = last_layer[:, 0, :] ``` **Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values: ``` ['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased'] ``` ## References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
{"language": "fr", "license": "mit", "tags": ["bert", "language-model", "flaubert", "flue", "french", "bert-base", "flaubert-base", "cased"], "datasets": ["flaubert"], "metrics": ["flue"]}
flaubert/flaubert_base_cased
null
[ "transformers", "pytorch", "flaubert", "fill-mask", "bert", "language-model", "flue", "french", "bert-base", "flaubert-base", "cased", "fr", "dataset:flaubert", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# FlauBERT: Unsupervised Language Model Pre-training for French **FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/ ) supercomputer. Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language.For more details please refer to the [official website](https://github.com/getalp/Flaubert). ## FlauBERT models | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `flaubert-small-cased` | 6 | 8 | 512 | 54 M | | `flaubert-base-uncased` | 12 | 12 | 768 | 137 M | | `flaubert-base-cased` | 12 | 12 | 768 | 138 M | | `flaubert-large-cased` | 24 | 16 | 1024 | 373 M | **Note:** `flaubert-small-cased` is partially trained so performance is not guaranteed. Consider using it for debugging purpose only. ## Using FlauBERT with Hugging Face's Transformers ```python import torch from transformers import FlaubertModel, FlaubertTokenizer # Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased', # 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased'] modelname = 'flaubert/flaubert_base_cased' # Load pretrained model and tokenizer flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True) flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False) # do_lowercase=False if using cased models, True if using uncased ones sentence = "Le chat mange une pomme." 
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)]) last_layer = flaubert(token_ids)[0] print(last_layer.shape) # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension) # The BERT [CLS] token correspond to the first hidden state of the last layer cls_embedding = last_layer[:, 0, :] ``` **Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values: ``` ['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased'] ``` ## References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
{"language": "fr", "license": "mit", "tags": ["bert", "language-model", "flaubert", "flue", "french", "flaubert-base", "uncased"], "datasets": ["flaubert"], "metrics": ["flue"]}
flaubert/flaubert_base_uncased
null
[ "transformers", "pytorch", "flaubert", "fill-mask", "bert", "language-model", "flue", "french", "flaubert-base", "uncased", "fr", "dataset:flaubert", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# FlauBERT: Unsupervised Language Model Pre-training for French **FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/ ) supercomputer. Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language.For more details please refer to the [official website](https://github.com/getalp/Flaubert). ## FlauBERT models | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `flaubert-small-cased` | 6 | 8 | 512 | 54 M | | `flaubert-base-uncased` | 12 | 12 | 768 | 137 M | | `flaubert-base-cased` | 12 | 12 | 768 | 138 M | | `flaubert-large-cased` | 24 | 16 | 1024 | 373 M | **Note:** `flaubert-small-cased` is partially trained so performance is not guaranteed. Consider using it for debugging purpose only. ## Using FlauBERT with Hugging Face's Transformers ```python import torch from transformers import FlaubertModel, FlaubertTokenizer # Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased', # 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased'] modelname = 'flaubert/flaubert_base_cased' # Load pretrained model and tokenizer flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True) flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False) # do_lowercase=False if using cased models, True if using uncased ones sentence = "Le chat mange une pomme." 
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)]) last_layer = flaubert(token_ids)[0] print(last_layer.shape) # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension) # The BERT [CLS] token correspond to the first hidden state of the last layer cls_embedding = last_layer[:, 0, :] ``` **Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values: ``` ['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased'] ``` ## References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
{"language": "fr", "license": "mit", "tags": ["bert", "language-model", "flaubert", "flue", "french", "bert-large", "flaubert-large", "cased"], "datasets": ["flaubert"], "metrics": ["flue"]}
flaubert/flaubert_large_cased
null
[ "transformers", "pytorch", "flaubert", "fill-mask", "bert", "language-model", "flue", "french", "bert-large", "flaubert-large", "cased", "fr", "dataset:flaubert", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# FlauBERT: Unsupervised Language Model Pre-training for French **FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/ ) supercomputer. Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language.For more details please refer to the [official website](https://github.com/getalp/Flaubert). ## FlauBERT models | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `flaubert-small-cased` | 6 | 8 | 512 | 54 M | | `flaubert-base-uncased` | 12 | 12 | 768 | 137 M | | `flaubert-base-cased` | 12 | 12 | 768 | 138 M | | `flaubert-large-cased` | 24 | 16 | 1024 | 373 M | **Note:** `flaubert-small-cased` is partially trained so performance is not guaranteed. Consider using it for debugging purpose only. ## Using FlauBERT with Hugging Face's Transformers ```python import torch from transformers import FlaubertModel, FlaubertTokenizer # Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased', # 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased'] modelname = 'flaubert/flaubert_base_cased' # Load pretrained model and tokenizer flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True) flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False) # do_lowercase=False if using cased models, True if using uncased ones sentence = "Le chat mange une pomme." 
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)]) last_layer = flaubert(token_ids)[0] print(last_layer.shape) # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension) # The BERT [CLS] token correspond to the first hidden state of the last layer cls_embedding = last_layer[:, 0, :] ``` **Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values: ``` ['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased'] ``` ## References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
{"language": "fr", "license": "mit", "tags": ["bert", "language-model", "flaubert", "flue", "french", "flaubert-small", "cased"], "datasets": ["flaubert"], "metrics": ["flue"]}
flaubert/flaubert_small_cased
null
[ "transformers", "pytorch", "flaubert", "fill-mask", "bert", "language-model", "flue", "french", "flaubert-small", "cased", "fr", "dataset:flaubert", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
A masked language model (MLM) fine-tuned from the Bertimbau-Base model on the Brazilian Federal Official Gazette (200k instances).
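A minimal usage sketch, assuming this checkpoint exposes the standard BERT masked-language-modeling head; the example sentence is an illustrative assumption.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the fill-mask pipeline
unmasker = pipeline("fill-mask", model="flavio-nakasato/berdou_200k")

# [MASK] is BERT's mask token; the sentence mimics official-gazette phrasing
for prediction in unmasker("O presidente assinou o [MASK] nesta data."):
    print(prediction["token_str"], prediction["score"])
```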
{}
flavio-nakasato/berdou_200k
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
A masked language model (MLM) fine-tuned from the Bertimbau-Base model on the Brazilian Federal Official Gazette (500k instances).
{}
flavio-nakasato/berdou_500k
null
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
A RoBERTa model pretrained on the Brazilian Federal Official Gazette (200k instances).
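A minimal usage sketch, assuming this checkpoint exposes a RoBERTa masked-language-modeling head; the example sentence is an illustrative assumption. The mask token is read from the tokenizer, since RoBERTa uses `<mask>` rather than `[MASK]`.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="flavio-nakasato/deeppolicytracker_200k")

# Build the prompt with the tokenizer's own mask token instead of hard-coding it
prompt = f"O decreto foi publicado no {unmasker.tokenizer.mask_token} oficial."
for prediction in unmasker(prompt):
    print(prediction["sequence"], prediction["score"])
```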
{}
flavio-nakasato/deeppolicytracker_200k
null
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
A RoBERTa model pretrained on the Brazilian Federal Official Gazette (500k instances).
{}
flavio-nakasato/deeppolicytracker_500k
null
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
A masked language model (MLM) fine-tuned from the BR-BERTo model on the Brazilian Federal Official Gazette (100k instances).
{}
flavio-nakasato/roberdou_100k
null
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
flax-community/Bengali-t5
null
[ "transformers", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
flax-community/GPT2-korean
null
[ "transformers", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# Image-captioning-Indonesia This is an encoder-decoder image captioning model using [CLIP](https://huggingface.co/transformers/model_doc/clip.html) as the visual encoder and [Marian](https://huggingface.co/transformers/model_doc/marian.html) as the textual decoder on datasets with Indonesian captions. This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team. ## How to use At time of writing, you will need to install [HuggingFace](https://github.com/huggingface/) from its latest master branch in order to load `FlaxMarian`. You will also need to have the [`flax_clip_vision_marian` folder](https://github.com/indonesian-nlp/Indonesia-Image-Captioning/tree/main/flax_clip_vision_marian) in your project directory to load the model using the `FlaxCLIPVisionMarianForConditionalGeneration` class. ```python from torchvision.io import ImageReadMode, read_image from torchvision.transforms import CenterCrop, ConvertImageDtype, Normalize, Resize from torchvision.transforms.functional import InterpolationMode import torch import numpy as np from transformers import MarianTokenizer from flax_clip_vision_marian.modeling_clip_vision_marian import FlaxCLIPVisionMarianForConditionalGeneration clip_marian_model_name = 'flax-community/Image-captioning-Indonesia' model = FlaxCLIPVisionMarianForConditionalGeneration.from_pretrained(clip_marian_model_name) marian_model_name = 'Helsinki-NLP/opus-mt-en-id' tokenizer = MarianTokenizer.from_pretrained(marian_model_name) config = model.config image_size = config.clip_vision_config.image_size # Image transformation transforms = torch.nn.Sequential( Resize([image_size], interpolation=InterpolationMode.BICUBIC), CenterCrop(image_size), ConvertImageDtype(torch.float), Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)), ) # Hyperparameters max_length = 8 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def generate_step(batch): output_ids = model.generate(pixel_values, **gen_kwargs) token_ids = np.array(output_ids.sequences)[0] caption = tokenizer.decode(token_ids) return caption image_file_path = image_file_path image = read_image(image_file_path, mode=ImageReadMode.RGB) image = transforms(image) pixel_values = torch.stack([image]).permute(0, 2, 3, 1).numpy() generated_ids = generate_step(pixel_values) print(generated_ids) ``` ## Training data The Model was trained on translated Coco,Flickr and ViZWiz, each of them were translated using google translate and marian mt. we took only random 2 captions per image for each datasets ## Training procedure The model was trained on a TPUv3-8 VM provided by the Google Cloud team. ## Team members - Cahya Wirawan ([@cahya](https://huggingface.co/cahya)) - Galuh Sahid ([@Galuh](https://huggingface.co/Galuh)) - Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia)) - Samsul Rahmadani ([@munggok](https://huggingface.co/munggok))
{"language": "id"}
flax-community/Image-captioning-Indonesia
null
[ "transformers", "jax", "clip-vision-marian", "text2text-generation", "id", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/IndoClip
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Neural ODE with Flax This is the result of the project ["Reproduce Neural ODE and SDE"][projectlink] in the [HuggingFace Flax/JAX community week][comweeklink]. <code>main.py</code> will execute training of ResNet or OdeNet on the MNIST dataset. [projectlink]: https://discuss.huggingface.co/t/reproduce-neural-ode-and-neural-sde/7590 [comweeklink]: https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#projects ## Dependency ### JAX and Flax For JAX installation, please follow the instructions [here][jaxinstalllink], or simply type ```bash pip install jax jaxlib ``` For Flax installation, ```bash pip install flax ``` [jaxinstalllink]: https://github.com/google/jax#installation TensorFlow Datasets will download the MNIST dataset to your environment. ## How to run training For (small) ResNet training, ```bash python main.py --model=resnet --lr=1e-4 --n_epoch=20 --batch_size=64 ``` For Neural ODE training, ```bash python main.py --model=odenet --lr=1e-4 --n_epoch=20 --batch_size=64 ``` For Continuous Normalizing Flow, ```bash python main.py --model=cnf --sample_dataset=circles ``` Sample datasets can be chosen as circles, moons, or scurve. # Sample Results ![cnf-viz](https://user-images.githubusercontent.com/72425253/126124351-44e00438-055e-4b1c-90ee-758a545dd602.gif) ![cnf-viz](https://user-images.githubusercontent.com/72425253/126124648-dcb3f8f4-396a-447c-96cf-f9304377fa48.gif) ![cnf-viz](https://user-images.githubusercontent.com/72425253/126127269-4c02ee6a-a9a3-4b9f-b380-f8669f58872b.gif) # Bird Call generation Score SDE This is the code for the bird call generation score SDE model. <code>core-sde-sampler.py</code> will execute the sampler. The sampler uses pretrained weights to generate bird calls. The weights can be found [here](https://github.com/mandelbrot-walker/Birdcall-score-sde/blob/main/ckpt.flax). To use different sample generation parameters, change the argument values. For example, ```bash python main.py --sigma=25 --num_steps=500 --signal_to_noise_ratio=0.10 --etol=1e-5 --sample_batch_size = 128 --sample_no = 47 ``` In order to generate the audio files, these dependencies are required: ```bash pip install librosa pip install soundfile ``` In order to train the model from scratch, please generate the dataset using this [link](https://www.kaggle.com/ibraheemmoosa/birdsong-spectogram-generation). The dataset is generated on Kaggle, so your Kaggle username and API key are required in the specified section inside the script during training. ```bash python main.py --sigma=35 --n_epochs=1000 --batch_size=512 --lr=1e-3 --num_steps=500 --signal_to_noise_ratio=0.15 --etol=1e-5 --sample_batch_size = 64 --sample_no = 23 ``` Generated samples can be found [here](https://github.com/mandelbrot-walker/Birdcall-score-sde/tree/main/generated_samples) and [here](https://colab.research.google.com/drive/1AbF4aIMkSfNs-G__MXzqY7JSrz6qvLYN).
{}
flax-community/NeuralODE_SDE
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# NOTE: We have trained newer and better Finnish RoBERTa large model which can be found from different repository: [https://huggingface.co/Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish). Our future Finnish models will be available at the [Finnish-NLP](https://huggingface.co/Finnish-NLP) Hugging Face organization # RoBERTa large model for Finnish Pretrained model on Finnish language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it makes a difference between finnish and Finnish. ## Model description RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs. ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='flax-community/RoBERTa-large-finnish') >>> unmasker("Moikka olen <mask> kielimalli.") [{'sequence': 'Moikka olen uusi kielimalli.', 'score': 0.05129234120249748, 'token': 1825, 'token_str': ' uusi'}, {'sequence': 'Moikka olen toinen kielimalli.', 'score': 0.03112379088997841, 'token': 2194, 'token_str': ' toinen'}, {'sequence': 'Moikka olen myös kielimalli.', 'score': 0.025534993037581444, 'token': 491, 'token_str': ' myös'}, {'sequence': 'Moikka olen ensimmäinen kielimalli.', 'score': 0.020146571099758148, 'token': 2832, 'token_str': ' ensimmäinen'}, {'sequence': 'Moikka olen vapaa kielimalli.', 'score': 0.018089469522237778, 'token': 2257, 'token_str': ' vapaa'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('flax-community/RoBERTa-large-finnish') model = RobertaModel.from_pretrained('flax-community/RoBERTa-large-finnish') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('flax-community/RoBERTa-large-finnish') model = TFRobertaModel.from_pretrained('flax-community/RoBERTa-large-finnish', from_pt=True) text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. ## Training data This Finnish RoBERTa model was pretrained on the combination of two datasets: - [mc4](https://huggingface.co/datasets/mc4), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset - [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501) Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 51GB of text. ## Training procedure ### Preprocessing The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of the model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked with `<s>` and the end of one by `</s>` The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `<mask>`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). ### Pretraining The model was trained on TPUv3-8 VM, sponsored by the Hugging Face JAX/Flax community week event, for 2 epochs with a sequence length of 128 and continuing for one more epoch with a sequence length of 512. 
The optimizer used is Adafactor with a learning rate of 2e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after. ## Evaluation results Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length. When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) and to our newer [Finnish RoBERTa-large](https://huggingface.co/Finnish-NLP/roberta-large-finnish) trained with larger dataset: | | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length | |----------------------------------------|----------|---------------------|---------------------|----------------------| |flax-community/RoBERTa-large-finnish |87.72 |94.42 |95.06 |73.67 | |Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 | |TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |**94.90** |**95.49** |**76.07** | To conclude, this model slightly loses to our newer [Finnish RoBERTa-large](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model trained with larger dataset and also slightly loses to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model. ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) - Tommi Vehviläinen [Hugging Face profile](https://huggingface.co/Tommi) Feel free to contact us for more details 🤗
{"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "roberta"], "datasets": ["mc4"], "widget": [{"text": "Moikka olen <mask> kielimalli."}]}
flax-community/RoBERTa-large-finnish
null
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "finnish", "fi", "dataset:mc4", "arxiv:1907.11692", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Sinhala GPT2 trained on MC4 (manually cleaned) ### Overview This is a smaller GPT2 model trained on the [MC4](https://github.com/allenai/allennlp/discussions/5056) Sinhala dataset. As Sinhala is one of the low-resource languages, only a handful of models have been trained for it, so this would be a great place to start training for more downstream tasks. This model uses a manually cleaned version of the MC4 dataset, which can be found [here](https://huggingface.co/datasets/keshan/clean-si-mc4). Although the dataset is relatively small (~3GB), the model finetuned on [news articles](https://huggingface.co/keshan/sinhala-gpt2-newswire) generates good and acceptable results. ## Model Specification The model chosen for training is GPT2 with the following specifications: 1. vocab_size=50257 2. n_embd=768 3. n_head=12 4. n_layer=12 5. n_positions=1024 ## How to Use You can use this model directly with a pipeline for causal language modeling: ```py from transformers import pipeline generator = pipeline('text-generation', model='flax-community/Sinhala-gpt2') generator("මම", max_length=50, num_return_sequences=5) ```
{"language": "si", "tags": ["Sinhala", "text-generation", "gpt2"], "datasets": ["mc4"]}
flax-community/Sinhala-gpt2
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "gpt2", "feature-extraction", "Sinhala", "text-generation", "si", "dataset:mc4", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
## Sinhala RoBERTa model trained on the MC4 Sinhala dataset (manually cleaned)
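A minimal feature-extraction sketch, assuming the checkpoint loads with the standard `AutoTokenizer`/`AutoModel` classes; the input text and mean pooling are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("flax-community/Sinhala-roberta")
model = AutoModel.from_pretrained("flax-community/Sinhala-roberta")

# Encode a short Sinhala input and run the encoder without gradients
inputs = tokenizer("මම", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state as a simple fixed-size representation
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```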
{"language": "si", "tags": ["fill-mask", "sinhala", "roberta"]}
flax-community/Sinhala-roberta
null
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "feature-extraction", "fill-mask", "sinhala", "si", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Update:</b> This model has been moved to <a href="https://huggingface.co/linhd-postdata/alberti-bert-base-multilingual-cased">linhd-postdata/alberti-bert-base-multilingual-cased</a>, where it will be maintained and updated. </p> </div> # ALBERTI ALBERTI is a set of two BERT-based multilingual model for poetry. One for verses and another one for stanzas. This model has been further trained with the PULPO corpus for verses using [Flax](https://github.com/google/flax), including training scripts. This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## PULPO PULPO, the Prodigious Unannotated Literary Poetry Corpus, is a set of multilingual corpora of verses and stanzas with over 95M words. The following corpora has been downloaded using the [Averell](https://github.com/linhd-postdata/averell/) tool, developed by the [POSTDATA](https://postdata.linhd.uned.es/) team: ### Spanish - [Disco v3](https://github.com/pruizf/disco) - [Corpus of Spanish Golden-Age Sonnets](https://github.com/bncolorado/CorpusSonetosSigloDeOro) - [Corpus general de poesía lírica castellana del Siglo de Oro](https://github.com/bncolorado/CorpusGeneralPoesiaLiricaCastellanaDelSigloDeOro) - [Gongocorpus](https://github.com/linhd-postdata/gongocorpus) - [source](http://obvil.sorbonne-universite.site/corpus/gongora/gongora_obra-poetica) ### English - [Eighteenth-Century Poetry Archive (ECPA)](https://github.com/alhuber1502/ECPA) - [For better for verse](https://github.com/waynegraham/for_better_for_verse) ### French - [Métrique en Ligne](https://crisco2.unicaen.fr/verlaine/index.php?navigation=accueil) - [source](https://github.com/linhd-postdata/metrique-en-ligne) ### Italian - [Biblioteca italiana](https://github.com/linhd-postdata/biblioteca_italiana) - [source](http://www.bibliotecaitaliana.it/) ### Czech - [Corpus of Czech Verse](https://github.com/versotym/corpusCzechVerse) ### Portuguese - [Stichotheque](https://gitlab.com/stichotheque/stichotheque-pt) Also, we obtained the following corpora from these sources: ### Spanish - [Poesi.as](https://github.com/linhd-postdata/poesi.as) - [source](http://www.poesi.as/) ### English - [A Gutenberg Poetry Corpus](https://github.com/aparrish/gutenberg-poetry-corpus) ### Arabic - [Arabic Poetry dataset](https://www.kaggle.com/ahmedabelal/arabic-poetry) ### Chinese - [THU Chinese Classical Poetry Corpus](https://github.com/THUNLP-AIPoet/Datasets/tree/master/CCPC) ### Finnish - [SKVR](https://github.com/sks190/SKVR) ### German - [TextGrid Poetry Corpus](https://github.com/linhd-postdata/textgrid-poetry) - [source](https://textgrid.de/en/digitale-bibliothek) - [German Rhyme Corpus](https://github.com/tnhaider/german-rhyme-corpus) ### Hungarian - [verskorpusz](https://github.com/ELTE-DH/verskorpusz) ### Portuguese - [Poems in Portuguese](https://www.kaggle.com/oliveirasp6/poems-in-portuguese) ### Russian - [19 000 Russian poems](https://www.kaggle.com/grafstor/19-000-russian-poems) ## Team members - Álvaro Pérez ([alvp](https://huggingface.co/alvp)) - Javier de la Rosa ([versae](https://huggingface.co/versae)) - Aitor Díaz 
([aitordiaz](https://huggingface.co/aitordiaz)) - Elena González-Blanco - Salvador Ros ([salva](https://huggingface.co/salva)) ## Useful links - [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6) - [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md) - [Community Week thread](https://discuss.huggingface.co/t/bertin-pretrain-roberta-large-from-scratch-in-spanish/7125) - [Community Week channel](https://discord.com/channels/858019234139602994/859113060068229190) - [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) - [Model Repository](https://huggingface.co/flax-community/alberti-bert-base-multilingual-cased/) ## Acknowledgments This project would not have been possible without the infrastructure and resources provided by HuggingFace and Google Cloud. Moreover, we want to thank POSTDATA Project (ERC-StG-679528) and the Computational Literary Studies Infrastructure (CLS INFRA No. 101004984) of the European Union's Horizon 2020 research and innovation programme for their support and time allowance.
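A minimal usage sketch for masked verse completion, assuming the standard fill-mask pipeline; the example verse is the one used in this model's widget.

```python
from transformers import pipeline

# Load the verse model from this repository (the maintained copy lives at
# linhd-postdata/alberti-bert-base-multilingual-cased, as noted above)
unmasker = pipeline("fill-mask", model="flax-community/alberti-bert-base-multilingual-cased")

for prediction in unmasker("¿Qué es la vida? Un [MASK]."):
    print(prediction["sequence"], prediction["score"])
```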
{"language": "es", "license": "cc-by-4.0", "tags": ["multilingual", "bert"], "pipeline_tag": "fill-mask", "widget": [{"text": "\u00bfQu\u00e9 es la vida? Un [MASK]."}]}
flax-community/alberti-bert-base-multilingual-cased
null
[ "transformers", "pytorch", "jax", "joblib", "safetensors", "bert", "fill-mask", "multilingual", "es", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/arabic-gpt-j-6b
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# arabic-t5-small

This is a T5v1.1 (small) model trained on the concatenation of the Arabic Billion Words corpus and the Arabic subsets of the mC4 and Oscar datasets.

The model could only be trained on about `10%` of the whole dataset due to time limitations. This is equivalent to `22'000` steps or about `4.3` billion tokens.

## Training parameters

| | |
| :-------------------: | :-----------: |
| Training batch size | `384` |
| Evaluation batch size | `768` |
| learning rate | `1e-2` |
| dtype | `jnp.float32` |

## Preprocessing and the tokenizer

We tried to keep the preprocessing to a bare minimum. We only replaced URLs, emails and social media user mentions with fixed tokens. Contrary to other pretrained Arabic LMs, we decided not to strip the Arabic diacritics and to keep them part of the vocabulary.

The tokenizer was trained on `5%` of the training set, with a vocabulary size of `64'000`.

For more details about preprocessing, check the [tokenizer code](https://huggingface.co/flax-community/arabic-t5-small/blob/main/t5_tokenizer_model.py).

## Data

The model was trained on the concatenation of the Arabic Billion Words corpus and the Arabic subsets of the mC4 and Oscar datasets. A random `0.1%` subset of the data was reserved for evaluation and the rest for training.

## Results

| | |
| :-----------------: | :-----------: |
| Evaluation accuracy | `56.84%` |
| Evaluation Loss | `2.423` |
| Training Loss | `2.392` |
| Training Time | `22h 23m 51s` |

## Note for finetuning

This model was pretrained with dropout turned off, so the default `dropout_rate` in the model config is `0`. To finetune the model, dropout should be turned back on, like this:

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("flax-community/arabic-t5-small", dropout_rate=0.1)
```

or,

```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("flax-community/arabic-t5-small", dropout_rate=0.1)
```
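As a concrete illustration of the preprocessing described above, the sketch below shows the kind of replacement that was applied. The placeholder tokens and regular expressions here are assumptions for illustration only; see the linked tokenizer code for the authors' actual preprocessing.

```python
import re

# Placeholder tokens -- assumptions for illustration, not necessarily the ones used in pretraining
URL_TOKEN, EMAIL_TOKEN, USER_TOKEN = "[رابط]", "[بريد]", "[مستخدم]"

def normalize(text: str) -> str:
    """Replace URLs, e-mails and @-mentions with fixed tokens, leaving diacritics untouched."""
    text = re.sub(r"https?://\S+|www\.\S+", URL_TOKEN, text)
    text = re.sub(r"\S+@\S+\.\S+", EMAIL_TOKEN, text)
    text = re.sub(r"@\w+", USER_TOKEN, text)
    return text
```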
{"language": ["ar"], "datasets": ["mc4", "oscar", "arabic_billion_words"]}
flax-community/arabic-t5-small
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "t5", "text2text-generation", "ar", "dataset:mc4", "dataset:oscar", "dataset:arabic_billion_words", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# bengali-t5-base

**bengali-t5-base** is a model trained on the Bengali portion of the mT5 dataset. We used the `T5-base` configuration for this model.

This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.

The model is trained on around ~11B tokens (batch size 64, sequence length 512, 350k steps).

## load tokenizer

```
>>> import transformers
>>> tokenizer = transformers.AutoTokenizer.from_pretrained("flax-community/bengali-t5-base")
>>> tokenizer.encode("আমি বাংলার গান গাই")
>>> tokenizer.decode([93, 1912, 814, 5995, 3, 1])
```

```
[93, 1912, 814, 5995, 3, 1]
'আমি বাংলার গান গাই </s>'
```

## load model

```
>>> from transformers import T5Config, FlaxT5ForConditionalGeneration
>>> config = T5Config.from_pretrained("flax-community/bengali-t5-base")
>>> model = FlaxT5ForConditionalGeneration.from_pretrained("flax-community/bengali-t5-base", config=config)
```

The model is trained on a `de-noising` objective following the scripts [here](https://huggingface.co/flax-community/bengali-t5-base/blob/main/run_t5_mlm_flax.py) and [here](https://huggingface.co/flax-community/bengali-t5-base/blob/main/run.sh). Currently, this model doesn't have any generation capability. If you want it to have generation capability, please fine-tune it on the `prefix-LM` objective mentioned in the [paper](https://arxiv.org/abs/1910.10683).

See the tensorboard log in the `Training metrics` tab.

Please note that we haven't finetuned the model on any downstream task.

## Proposal

- [Project Proposal](https://discuss.huggingface.co/t/pretrain-t5-from-scratch-in-bengali/7121)

## Participants

- [Ibraheem Muhammad Moosa](https://huggingface.co/ibraheemmoosa)
- [Tasnim Mohiuddin](https://huggingface.co/tasnim)
- [Khalid Saifullah](https://huggingface.co/khalidsaifullaah)
- [Tahsin Mayeesha](https://tahsin-mayeesha.github.io/)
- [M Saiful Bari](https://huggingface.co/sbmaruf)

## Useful links

- [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6)
- [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md)
- [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling)
- [Model Repository](https://huggingface.co/flax-community/bengali-t5-base)
{}
flax-community/bengali-t5-base
null
[ "transformers", "pytorch", "jax", "tensorboard", "safetensors", "mt5", "text2text-generation", "arxiv:1910.10683", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
## BERT base-uncased for Swahili

This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.

## How to use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("flax-community/bert-base-uncased-swahili")

model = AutoModelForMaskedLM.from_pretrained("flax-community/bert-base-uncased-swahili")

print(round((model.num_parameters())/(1000*1000)), "Million Parameters")
# 110 Million Parameters
```

#### **Training Data**:

This model was trained on [Swahili Safi](https://huggingface.co/datasets/flax-community/swahili-safi)

#### **More Details**:

For more details and Demo please check [HF Swahili Space](https://huggingface.co/spaces/flax-community/Swahili)
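As an extra illustration (a sketch, not part of the original card), the checkpoint can also be queried directly through the `fill-mask` pipeline; the sentence is the one used in the model widget:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="flax-community/bert-base-uncased-swahili")

# Swahili proverb used in the model widget
for prediction in fill_mask("Si kila mwenye makucha [MASK] simba."):
    print(prediction["token_str"], round(prediction["score"], 3))
```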
{"language": "sw", "datasets": ["flax-community/swahili-safi"], "widget": [{"text": "Si kila mwenye makucha [MASK] simba."}]}
flax-community/bert-base-uncased-swahili
null
[ "transformers", "pytorch", "jax", "tensorboard", "bert", "fill-mask", "sw", "dataset:flax-community/swahili-safi", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
## Swahili News Classification with BERT This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team. This [model](https://huggingface.co/flax-community/bert-base-uncased-swahili) was used as the base and fine-tuned for this task. ## How to use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("flax-community/bert-swahili-news-classification") model = AutoModelForSequenceClassification.from_pretrained("flax-community/bert-swahili-news-classification") ``` ``` Eval metrics (10% valid set): {'accuracy': 0.9114740008594757} ```
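For inference, a minimal sketch (not from the original card) is to wrap the checkpoint in a `text-classification` pipeline; the headline below is shortened from the widget example:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="flax-community/bert-swahili-news-classification",
)

headline = "Idris ameandika kwenye ukurasa wake wa Instagram akimkumbusha Diamond kutekeleza ahadi yake"
print(classifier(headline))  # returns the predicted news category with its score
```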
{"language": "sw", "datasets": ["flax-community/swahili-safi"], "widget": [{"text": "Idris ameandika kwenye ukurasa wake wa Instagram akimkumbusha Diamond kutekeleza ahadi yake kumpigia Zari magoti kumuomba msamaha kama alivyowahi kueleza awali.Idris ameandika;"}]}
flax-community/bert-swahili-news-classification
null
[ "transformers", "pytorch", "jax", "tensorboard", "safetensors", "bert", "text-classification", "sw", "dataset:flax-community/swahili-safi", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/bertin-roberta-large-es
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# NOTE: This repository is now superseded by https://huggingface.co/bertin-project/bertin-roberta-base-spanish. This model corresponds to the `beta` version of the model using stepwise oversampling, trained for 200k steps with a sequence length of 128. Version 1 is now available and should be used instead.

# BERTIN

BERTIN is a series of BERT-based models for Spanish. This one is a RoBERTa-large model trained from scratch on the Spanish portion of mC4 using [Flax](https://github.com/google/flax), including training scripts.

This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.

## Spanish mC4

The Spanish portion of mC4 contains about 416 million records and 235 billion words.

```bash
$ zcat c4/multilingual/c4-es*.tfrecord*.json.gz | wc -l
416057992
```

```bash
$ zcat c4/multilingual/c4-es*.tfrecord-*.json.gz | jq -r '.text | split(" ") | length' | paste -s -d+ - | bc
235303687795
```

## Team members

- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Paulo Villegas ([paulo](https://huggingface.co/paulo))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))

## Useful links

- [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6)
- [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md)
- [Community Week thread](https://discuss.huggingface.co/t/bertin-pretrain-roberta-large-from-scratch-in-spanish/7125)
- [Community Week channel](https://discord.com/channels/858019234139602994/859113060068229190)
- [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling)
- [Model Repository](https://huggingface.co/flax-community/bertin-roberta-large-spanish/)
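For quick experimentation, the checkpoint can be loaded through the standard `fill-mask` pipeline. This is an illustrative sketch rather than an official example; the sentence comes from the model widget (note the RoBERTa-style `<mask>` token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="flax-community/bertin-roberta-large-spanish")

for prediction in fill_mask("Fui a la librería a comprar un <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```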
{"language": "es", "license": "cc-by-4.0", "tags": ["spanish", "roberta"], "pipeline_tag": "fill-mask", "widget": [{"text": "Fui a la librer\u00eda a comprar un <mask>."}]}
flax-community/bertin-roberta-large-spanish
null
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "fill-mask", "spanish", "es", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# BigBird base model

BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

It is a model pretrained on English text using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long document summarization and question answering with long contexts.

## How to use

`TODO: Update`

Here is how to use this model to get the features of a given text in Flax:

```python
from transformers import BigBirdTokenizer, FlaxBigBirdModel

model_id = "flax-community/bigband"

# by default it's in `block_sparse` mode with num_random_blocks=3, block_size=64
model = FlaxBigBirdModel.from_pretrained(model_id)

# you can change `attention_type` to full attention like this:
model = FlaxBigBirdModel.from_pretrained(model_id, attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = FlaxBigBirdModel.from_pretrained(model_id, block_size=16, num_random_blocks=2)

tokenizer = BigBirdTokenizer.from_pretrained(model_id)

text = "Replace me by any text you'd like."
inputs = tokenizer(text, return_tensors="jax")
output = model(**inputs)
```

## Training Data

`TODO: Update`

This model is pre-trained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It uses the same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).

## Training Procedure

`TODO: Update`

Documents longer than 4096 tokens were split into multiple documents, and documents that were much shorter than 4096 were joined. Following the original BERT training, 15% of tokens were masked and the model is trained to predict the masked tokens. The model is warm-started from RoBERTa's checkpoint.
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "cc_news"]}
flax-community/bigband
null
[ "transformers", "big_bird", "pretraining", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:cc_news", "arxiv:2007.14062", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# T5 model for sentence splitting in English

Sentence splitting is the task of dividing a long sentence into multiple sentences. E.g.:

```
Mary likes to play football in her freetime whenever she meets with her friends that are very nice people.
```

could be split into

```
Mary likes to play football in her freetime whenever she meets with her friends.
```

```
Her friends are very nice people.
```

## How to use it in your code:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("flax-community/byt5-base-wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("flax-community/byt5-base-wikisplit")

complex_sentence = "This comedy drama is produced by Tidy , the company she co-founded in 2008 with her husband David Peet , who is managing director ."
sample_tokenized = tokenizer(complex_sentence, return_tensors="pt")

answer = model.generate(sample_tokenized['input_ids'], attention_mask = sample_tokenized['attention_mask'], max_length=256, num_beams=5)
gene_sentence = tokenizer.decode(answer[0], skip_special_tokens=True)
print(gene_sentence)

"""
Output:
This comedy drama is produced by Tidy. She co-founded Tidy in 2008 with her husband David Peet, who is managing director.
"""
```

## Datasets:

[Wiki_Split](https://research.google/tools/datasets/wiki-split/)

## Current baseline from [paper](https://arxiv.org/abs/1907.12461)

![baseline](./baseline.png)

## Our Results:

| Model | Exact | SARI | BLEU |
| --- | --- | --- | --- |
| [t5-base-wikisplit](https://huggingface.co/flax-community/t5-base-wikisplit) | 17.93 | 67.5438 | 76.9 |
| [t5-v1_1-base-wikisplit](https://huggingface.co/flax-community/t5-v1_1-base-wikisplit) | 18.1207 | 67.4873 | 76.9478 |
| [byt5-base-wikisplit](https://huggingface.co/flax-community/byt5-base-wikisplit) | 11.3582 | 67.2685 | 73.1682 |
| [t5-large-wikisplit](https://huggingface.co/flax-community/t5-large-wikisplit) | 18.6632 | 68.0501 | 77.1881 |
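Scores like the SARI numbers above can be approximated with the `evaluate` library. The snippet below is a rough sketch of such a scoring step under that assumption, not the authors' original evaluation code; the reference simplification is made up for illustration:

```python
import evaluate

sari = evaluate.load("sari")

sources = ["This comedy drama is produced by Tidy , the company she co-founded in 2008 with her husband David Peet , who is managing director ."]
predictions = ["This comedy drama is produced by Tidy. She co-founded Tidy in 2008 with her husband David Peet, who is managing director."]
references = [["This comedy drama is produced by Tidy. Tidy is the company she co-founded in 2008 with her husband David Peet, who is managing director."]]

# SARI compares each prediction against both its source sentence and the reference splits
print(sari.compute(sources=sources, predictions=predictions, references=references))
```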
{"datasets": ["wiki_split"], "widget": [{"text": "Mary likes to play football in her freetime whenever she meets with her friends that are very nice people."}]}
flax-community/byt5-base-wikisplit
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "t5", "text2text-generation", "dataset:wiki_split", "arxiv:1907.12461", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/clip-italian
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# Searching Reaction GIFs with CLIP

![header gif](./assets/huggingface_explode3.png)

Reaction GIFs are an integral part of today's communication. They convey complex emotions on many levels, in a short, compact format. If a picture is worth a thousand words then a GIF is worth more. We might even say that the level of complexity and expressiveness increases like: `Emoji < Memes/Image < GIFs`

I think most people would agree it is not always easy to find the perfect reaction GIF. Although we started out with the more ambitious goal of GIF/image generation, we later settled on first finetuning CLIP, which is needed to properly drive a generation model (like VQGAN).

![header gif](./assets/main.gif)

Available CLIP models wouldn't be suitable to use without this finetuning, as explained in the challenges below.

## 📝 Challenges

Classic (image, text) tasks like image search and caption generation all focus on cases where the text is a description of the image. This is mainly because the large-scale datasets available, like COCO and WIT, happen to be of that format. So it is interesting to see if models can also capture higher-level relations, like sentiment -> image mapping, where there is great variation on both sides. We can think of reaction GIFs/images as sentiment-like; in fact, the dataset we use was also gathered for sentiment analysis. There is no one correct reaction GIF, which also makes evaluation challenging.

# Dataset

We use the [Reaction GIF dataset](https://github.com/bshmueli/ReactionGIF) by Shmueli et al. Also available in [datasets](https://huggingface.co/datasets/julien-c/reactiongif) (without the gifs, but I already had those files, nice coincidence 😉)

![Dataset](./assets/dataset.png)

We only use the tweet and GIF response fields. For this short experiment we only used the first frame of each GIF to generate the (text, image) pairs. I thought about using multiple frames from the GIFs, but there may not be much change between frames and I didn't know how that would affect the already overfitting model, so I decided to go with the simpler single-frame version.

As opposed to other datasets, like COCO, we don't have multiple captions per image. Although the same GIFs are used multiple times and we could modify the dataset that way, we chose not to, since we already had a very small dataset.

- Domain: Twitter
- Train size: 18,976
- Validation size: 351

# Model

We used the Hybrid CLIP model. Training script: [Hybrid CLIP](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/hybrid_clip/run_hybrid_clip.py)

- Text Model: [twitter-roberta-base-emoji](https://huggingface.co/cardiffnlp/twitter-roberta-base-emoji)
- Vision Model: [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32)

## Different models tried

`cardiffnlp/twitter-roberta-base-*` models performed best compared to other text models, like the plain `roberta-base` and `bert-base-cased`. This makes sense since these models are already trained on Twitter data. Among the `cardiffnlp/twitter-roberta-base-*` models, `twitter-roberta-base-emoji` performed best (as I had hoped). This model is `cardiffnlp/twitter-roberta-base` further fine-tuned on an emoji classification task, which we can say (emojis) is a parallel to GIFs.

Also tried `google/vit-base-patch32-384` and `google/vit-base-patch16-384` for the vision models, but results were inconclusive.
## Result Interpretation (Warning) It would be wrong to claim that this model learned 'semantic reasoning' between sentence and image features. It more likely learned a mapping between sentence sentiment and image occurrence statistics. Because the set of gif images repeat across the dataset, although paired with different sentences. That is not to say that learning such semantic relations isn't feasible with this model. And it is well worth working on in the future, with a larger and better constructed dataset. ### 📈 Training Logs Training logs can be found [here](https://wandb.ai/cceyda/flax-clip?workspace=user-cceyda) It was really easy to overfit since it was a tiny dataset. Used early stopping. Other parameters: ``` --max_seq_length 128 \ --per_device_train_batch_size="32" \ --learning_rate="1e-5" --warmup_steps="150" ``` # 💡 Future Potential It is possible to generate a very large training set by scraping twitter.(Couldn't do during the event because of twitter rate limit) I found it surprising how well the results turned out to be with so little data and training time. Although it is really hard to define what is an appropriate reaction image, and there are definite mistakes model makes. (I also trained just a plain clip but didn't have time to prep demo for that 😅 which was also similarly over fitting and only had time to do a single run) I will definitely be trying out training a similar model for emoji & meme data. Training CLIP is just the first step, if we have a well trained CLIP generation is within reach 🚀 # How to use The final model available [here](https://huggingface.co/ceyda/clip-reply) ```py from model import FlaxHybridCLIP # see demo from transformers import AutoTokenizer, CLIPProcessor model = FlaxHybridCLIP.from_pretrained("ceyda/clip-reply") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") processor.tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base") def query(image_paths,query_text): images = [Image.open(im).convert("RGB") for im in image_paths] inputs = processor(text=[query_text], images=images, return_tensors="jax", padding=True) inputs["pixel_values"] = jnp.transpose(inputs["pixel_values"], axes=[0, 2, 3, 1]) outputs = model(**inputs) logits_per_image = outputs.logits_per_image.reshape(-1) probs = jax.nn.softmax(logits_per_image) ``` # Created By Ceyda Cinarel [@ceyda](https://huggingface.co/ceyda) Made during the flax community [event](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104/58) # TL;DR The task Input: Some sentence (like a tweet) Output: The most suitable reaction GIF image (Ranking) Example: - Input: I miss you - Output: ![hug](./assets/example_gif.jpg) # Demo https://huggingface.co/spaces/flax-community/clip-reply-demo
{}
flax-community/clip-reply
null
[ "tensorboard", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
zero-shot-image-classification
transformers
# Model Card: clip-rsicd ## Model Details This model is a fine-tuned [CLIP by OpenAI](https://huggingface.co/openai/clip-vit-base-patch32). It is designed with an aim to improve zero-shot image classification, text-to-image and image-to-image retrieval specifically on remote sensing images. ### Model Date July 2021 ### Model Type The base model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. ### Model Version We release several checkpoints for `clip-rsicd` model. Refer to [our github repo](https://github.com/arampacha/CLIP-rsicd#evaluation-results) for performance metrics on zero-shot classification for each of those. ### Training To reproduce the fine-tuning procedure one can use released [script](https://github.com/arampacha/CLIP-rsicd/blob/master/run_clip_flax_tv.py). The model was trained using batch size 1024, adafactor optimizer with linear warmup and decay with peak learning rate 1e-4 on 1 TPU-v3-8. Full log of the training run can be found on [WandB](https://wandb.ai/wandb/hf-flax-clip-rsicd/runs/2dj1exsw). ### Demo Check out the model text-to-image and image-to-image capabilities using [this demo](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo). ### Documents - [Fine-tuning CLIP on RSICD with HuggingFace and flax/jax on colab using TPU](https://colab.research.google.com/github/arampacha/CLIP-rsicd/blob/master/nbs/Fine_tuning_CLIP_with_HF_on_TPU.ipynb) ### Use with Transformers ```python from PIL import Image import requests from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("flax-community/clip-rsicd-v2") processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2") url = "https://raw.githubusercontent.com/arampacha/CLIP-rsicd/master/data/stadium_1.jpg" image = Image.open(requests.get(url, stream=True).raw) labels = ["residential area", "playground", "stadium", "forest", "airport"] inputs = processor(text=[f"a photo of a {l}" for l in labels], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities for l, p in zip(labels, probs[0]): print(f"{l:<16} {p:.4f}") ``` [Try it on colab](https://colab.research.google.com/github/arampacha/CLIP-rsicd/blob/master/nbs/clip_rsicd_zero_shot.ipynb) ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. In addition, we can imagine applications in defense and law enforcement, climate change and global warming, and even some consumer applications. A partial list of applications can be found [here](https://github.com/arampacha/CLIP-rsicd#applications). In general we think such models can be useful as digital assistants for humans engaged in searching through large collections of images. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. #### Primary intended uses The primary intended users of these models are AI researchers. 
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ## Data The model was trained on publicly available remote sensing image captions datasets. Namely [RSICD](https://github.com/201528014227051/RSICD_optimal), [UCM](https://mega.nz/folder/wCpSzSoS#RXzIlrv--TDt3ENZdKN8JA) and [Sydney](https://mega.nz/folder/pG4yTYYA#4c4buNFLibryZnlujsrwEQ). More information on the datasets used can be found on [our project page](https://github.com/arampacha/CLIP-rsicd#dataset). ## Performance and Limitations ### Performance | Model-name | k=1 | k=3 | k=5 | k=10 | | -------------------------------- | ----- | ----- | ----- | ----- | | original CLIP | 0.572 | 0.745 | 0.837 | 0.939 | | clip-rsicd-v2 (this model) | **0.883** | **0.968** | **0.982** | **0.998** | ## Limitations The model is fine-tuned on RSI data but can contain some biases and limitations of the original CLIP model. Refer to [CLIP model card](https://huggingface.co/openai/clip-vit-base-patch32#limitations) for details on those.
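Since the model is aimed at text-to-image retrieval as well as zero-shot classification, the following is a rough sketch (not from the original card; image paths are placeholders) of ranking a local set of remote sensing images against a text query:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("flax-community/clip-rsicd-v2")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2")

image_paths = ["img_0001.jpg", "img_0002.jpg"]  # placeholder paths to your own images
images = [Image.open(p).convert("RGB") for p in image_paths]

with torch.no_grad():
    image_embeds = model.get_image_features(**processor(images=images, return_tensors="pt"))
    text_embeds = model.get_text_features(
        **processor(text=["an airport with several planes"], return_tensors="pt", padding=True)
    )

# cosine similarity between the query and every image, highest first
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
scores = (text_embeds @ image_embeds.T).squeeze(0)
for path, score in sorted(zip(image_paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{path}: {score:.3f}")
```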
{"tags": ["vision"]}
flax-community/clip-rsicd-v2
null
[ "transformers", "pytorch", "jax", "clip", "zero-shot-image-classification", "vision", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
zero-shot-image-classification
transformers
{}
flax-community/clip-rsicd-v3
null
[ "transformers", "pytorch", "jax", "clip", "zero-shot-image-classification", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
zero-shot-image-classification
transformers
{}
flax-community/clip-rsicd-v4
null
[ "transformers", "pytorch", "jax", "clip", "zero-shot-image-classification", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
zero-shot-image-classification
transformers
# Model Card: clip-rsicd ## Model Details This model is a finetuned [CLIP by OpenAI](https://huggingface.co/openai/clip-vit-base-patch32). It is designed with an aim to improve zero-shot image classification, text-to-image and image-to-image retrieval specifically on remote sensing images. ### Model Date July 2021 ### Model Type The base model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. ### Model Version We release several checkpoints for `clip-rsicd` model. Refer to [our github repo](https://github.com/arampacha/CLIP-rsicd#evaluation-results) for performance metrics on zero-shot classification for each of those. ### Training To reproduce the fine-tuning procedure one can use released [script](https://github.com/arampacha/CLIP-rsicd/blob/master/run_clip_flax_tv.py). The model was trained using batch size 1024, adafactor optimizer with linear warmup and decay with peak learning rate 1e-4 on 1 TPU-v3-8. Full log of the training run can be found on [WandB](https://wandb.ai/wandb/hf-flax-clip-rsicd/runs/1ts243k3). ### Demo Check out the model text-to-image and image-to-image capabilities using [this demo](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo). ### Documents - [Fine-tuning CLIP on RSICD with HuggingFace and flax/jax on colab using TPU](https://colab.research.google.com/github/arampacha/CLIP-rsicd/blob/master/nbs/Finetuning_CLIP_with_HF_and_jax.ipynb) ### Use with Transformers ```py from PIL import Image import requests from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("flax-community/clip-rsicd") processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd") url = "https://raw.githubusercontent.com/arampacha/CLIP-rsicd/master/data/stadium_1.jpg" image = Image.open(requests.get(url, stream=True).raw) labels = ["residential area", "playground", "stadium", "forrest", "airport"] inputs = processor(text=[f"a photo of a {l}" for l in labels], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities for l, p in zip(labels, probs[0]): print(f"{l:<16} {p:.4f}") ``` [Try it on colab](https://colab.research.google.com/github/arampacha/CLIP-rsicd/blob/master/nbs/clip_rsicd_zero_shot.ipynb) ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. In addition, we can imagine applications in defense and law enforcement, climate change and global warming, and even some consumer applications. A partial list of applications can be found [here](https://github.com/arampacha/CLIP-rsicd#applications). In general we think such models can be useful as digital assistants for humans engaged in searching through large collections of images. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. #### Primary intended uses The primary intended users of these models are AI researchers. 
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ## Data The model was trained on publicly available remote sensing image captions datasets. Namely [RSICD](https://github.com/201528014227051/RSICD_optimal), [UCM](https://mega.nz/folder/wCpSzSoS#RXzIlrv--TDt3ENZdKN8JA) and [Sydney](https://mega.nz/folder/pG4yTYYA#4c4buNFLibryZnlujsrwEQ). More information on the datasets used can be found on [our project page](https://github.com/arampacha/CLIP-rsicd#dataset). ## Performance and Limitations ### Performance | Model-name | k=1 | k=3 | k=5 | k=10 | | -------------------------------- | ----- | ----- | ----- | ----- | | original CLIP | 0.572 | 0.745 | 0.837 | 0.939 | | clip-rsicd (this model) | 0.843 | 0.958 | 0.977 | 0.993 | ## Limitations The model is finetuned on RSI data but can contain some biases and limitations of the original CLIP model. Refer to [CLIP model card](https://huggingface.co/openai/clip-vit-base-patch32#limitations) for details on those.
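The card also mentions image-to-image retrieval; a minimal sketch of that use case (illustrative only, with placeholder file names) is to embed the query image and the candidate images with the same encoder and rank by cosine similarity:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("flax-community/clip-rsicd")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd")

def embed_images(paths):
    images = [Image.open(p).convert("RGB") for p in paths]
    with torch.no_grad():
        feats = model.get_image_features(**processor(images=images, return_tensors="pt"))
    return feats / feats.norm(dim=-1, keepdim=True)

query = embed_images(["query_stadium.jpg"])            # placeholder query image
candidate_paths = ["img_0001.jpg", "img_0002.jpg"]     # placeholder candidate images
scores = (query @ embed_images(candidate_paths).T).squeeze(0)

for path, score in sorted(zip(candidate_paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{path}: {score:.3f}")
```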
{"tags": ["vision"]}
flax-community/clip-rsicd
null
[ "transformers", "pytorch", "jax", "clip", "zero-shot-image-classification", "vision", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# CLIP-Spanish CLIP Spanish is a CLIP-like model for Spanish language. It is composed of [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) as a language encoder and the ViT-B/32 image encoder from [CLIP](https://huggingface.co/openai/clip-vit-base-patch32). The model is implemented in [Flax](https://github.com/google/flax), including training scripts (see `training.md`). This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## Spanish WIT We used a subset of 141,230 Spanish captions from the [WIT dataset](https://github.com/google-research-datasets/wit) for training. ## Team members - Eduardo González Ponferrada ([edugp](https://huggingface.co/edugp)) - Manu Romero ([mrm8488](https://huggingface.co/)) - María Grandury ([mariagrandury](https://huggingface.co/)) ## Useful links - [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6) - [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md) - [Community Week thread](https://discuss.huggingface.co/t/bertin-pretrain-roberta-large-from-scratch-in-spanish/7125) - [Community Week channel](https://discord.com/channels/858019234139602994/859113060068229190) - [Hybrid CLIP example scripts](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects/hybrid_clip) - [Model Repository](https://huggingface.co/flax-community/bertin-roberta-large-spanish/)
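## Example usage

The repository does not ship a dedicated pipeline, so the snippet below is only an illustrative sketch. It assumes the `FlaxHybridCLIP` class from the hybrid CLIP example scripts linked above is available locally; the tokenizer choice and image preprocessing are also assumptions:

```python
import jax.numpy as jnp
from PIL import Image
from transformers import AutoTokenizer, CLIPProcessor
from modeling_hybrid_clip import FlaxHybridCLIP  # from the hybrid_clip example scripts

model = FlaxHybridCLIP.from_pretrained("flax-community/clip-spanish")
tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-roberta-base-spanish")
image_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

captions = ["un perro jugando en la playa", "una taza de café sobre la mesa"]
text_inputs = tokenizer(captions, padding=True, return_tensors="np")

pixel_values = image_processor(images=Image.open("foto.jpg").convert("RGB"), return_tensors="np")["pixel_values"]
pixel_values = jnp.transpose(pixel_values, axes=[0, 2, 3, 1])  # the Flax vision encoder expects channels-last input

outputs = model(pixel_values=pixel_values, **text_inputs)
print(outputs.logits_per_image)  # similarity of the image to each Spanish caption
```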
{"language": "es", "license": "cc-by-4.0", "tags": ["spanish", "roberta", "vit"]}
flax-community/clip-spanish
null
[ "transformers", "jax", "hybrid-clip", "spanish", "roberta", "vit", "es", "license:cc-by-4.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# CLIP-Vision-BERT Multilingual Pre-trained Model Pretrained CLIP-Vision-BERT pre-trained on translated [Conceptual-12M](https://github.com/google-research-datasets/conceptual-12m) image-text pairs using a masked language modeling (MLM) objective. 10M cleaned image-text pairs are translated using [mBART-50 one-to-many model](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) to 2.5M examples each in English, French, German and Spanish. This model is based on the VisualBERT which was introduced in [this paper](https://arxiv.org/abs/1908.03557) and first released in [this repository](https://github.com/uclanlp/visualbert). We trained CLIP-Vision-BERT model during community week hosted by Huggingface 🤗 using JAX/Flax. This checkpoint is pre-trained for 60k steps. ## Model description CLIP-Vision-BERT is a modified BERT model which takes in visual embeddings from CLIP-Vision transformer and concatenates them with BERT textual embeddings before passing them to the self-attention layers of BERT. This is done for deep cross-modal interaction between the two modes. ## Intended uses & limitations❗️ You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks such as visuo-linguistic sequence classification or visual question answering. We used this model to fine-tuned on a multi-translated version of the visual question answering task - [VQA v2](https://visualqa.org/challenge.html). Since Conceptual-12M is a dataset scraped from the internet, it will involve some biases which will also affect all fine-tuned versions of this model. ### How to use❓ You can use this model directly with a pipeline for masked language modeling. You will need to clone the model from [here](https://github.com/gchhablani/multilingual-vqa). An example of usage is shown below: ```python >>> from torchvision.io import read_image >>> import numpy as np >>> import os >>> from transformers import CLIPProcessor, BertTokenizerFast >>> from model.flax_clip_vision_bert.modeling_clip_vision_bert import FlaxCLIPVisionBertForMaskedLM >>> image_path = os.path.join('images/val2014', os.listdir('images/val2014')[0]) >>> img = read_image(image_path) >>> clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32') ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy. >>> clip_outputs = clip_processor(images=img) >>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0) # Need to transpose images as model expected channel last images. >>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased') >>> model = FlaxCLIPVisionBertForMaskedLM.from_pretrained('flax-community/clip-vision-bert-cc12m-60k') >>> text = "Three teddy [MASK] in a showcase." >>> tokens = tokenizer([text], return_tensors="np") >>> pixel_values = np.concatenate([clip_outputs['pixel_values']]) >>> outputs = model(pixel_values=pixel_values, **tokens) >>> indices = np.where(tokens['input_ids']==tokenizer.mask_token_id) >>> preds = outputs.logits[indices][0] >>> sorted_indices = np.argsort(preds)[::-1] # Get reverse sorted scores /home/crocoder/anaconda3/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:4615: UserWarning: 'kind' argument to argsort is ignored. 
warnings.warn("'kind' argument to argsort is ignored.") >>> top_5_indices = sorted_indices[:5] >>> top_5_tokens = tokenizer.convert_ids_to_tokens(top_5_indices) >>> top_5_scores = preds[top_5_indices] >>> print(dict(zip(top_5_tokens, top_5_scores))) {'bears': 19.241959, 'bear': 17.700356, 'animals': 14.368396, 'girls': 14.343797, 'dolls': 14.274415} ``` ## Training data 🏋🏻‍♂️ The CLIP-Vision-BERT model was pre-trained on a translated version of the Conceptual-12m dataset in four languages using mBART-50: English, French, German and Spanish, with 2.5M image-text pairs in each. The dataset captions and image urls can be downloaded from [flax-community/conceptual-12m-mbart-50-translated](https://huggingface.co/datasets/flax-community/conceptual-12m-mbart-50-multilingual). ## Data Cleaning 🧹 Though the original dataset contains 12M image-text pairs, a lot of the URLs are invalid now, and in some cases, images are corrupt or broken. We remove such examples from our data, which leaves us with approximately 10M image-text pairs. **Splits** We used 99% of the 10M examples as a train set, and the remaining ~ 100K examples as our validation set. ## Training procedure 👨🏻‍💻 ### Preprocessing The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of approximately 110,000. The beginning of a new document is marked with `[CLS]` and the end of one by `[SEP]` The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The visual embeddings are taken from the CLIP-Vision model and combined with the textual embeddings inside the BERT embedding layer. The padding is done in the middle. Here is an example of what the embeddings look like: ``` [CLS Emb] [Textual Embs] [SEP Emb] [Pad Embs] [Visual Embs] ``` A total length of 128 tokens, including the visual embeddings, is used. The texts are truncated or padded accordingly. ### Pretraining The checkpoint of the model was trained on Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores) **8 v3 TPU cores** for 60k steps with a per device batch size of 64 and a max sequence length of 128. The optimizer used is Adafactor with a learning rate of 1e-4, learning rate warmup for 5,000 steps, and linear decay of the learning rate after. We tracked experiments using TensorBoard. Here is the link to the main dashboard: [CLIP Vision BERT CC12M Pre-training Dashboard](https://huggingface.co/flax-community/multilingual-vqa-pt-ckpts/tensorboard) #### **Pretraining Results 📊** The model at this checkpoint reached **eval accuracy of 67.53%** and **with train loss at 1.793 and eval loss at 1.724**. ## Fine Tuning on downstream tasks We performed fine-tuning on downstream tasks. We used the following datasets for visual question answering: 1. Multilingual of [Visual Question Answering (VQA) v2](https://visualqa.org/challenge.html) - We translated this dataset to the four languages using `Helsinki-NLP` Marian models. The translated data can be found at [flax-community/multilingual-vqa](https://huggingface.co/datasets/flax-community/multilingual-vqa). The checkpoints for the fine-tuned model on this pre-trained checkpoint can be found [here](https://huggingface.co/flax-community/multilingual-vqa-pt-60k-ft/tensorboard). 
The fine-tuned model achieves eval accuracy of 49% on our validation dataset. ## Team Members - Gunjan Chhablani [@gchhablani](https://hf.co/gchhablani) - Bhavitvya Malik[@bhavitvyamalik](https://hf.co/bhavitvyamalik) ## Acknowledgements We thank [Nilakshan Kunananthaseelan](https://huggingface.co/knilakshan20) for helping us whenever he could get a chance. We also thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the CC-12M data on our TPU-VMs and we are very grateful to him. This project would not be possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla) who met with us frequently and helped review our approach and guided us throughout the project. Huge thanks to Huggingface 🤗 & Google Jax/Flax team for such a wonderful community week and for answering our queries on the Slack channel, and for providing us with the TPU-VMs. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
{}
flax-community/clip-vision-bert-cc12m-60k
null
[ "transformers", "jax", "clip-vision-bert", "fill-mask", "arxiv:1908.03557", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# CLIP-Vision-BERT Multilingual Pre-trained Model Pretrained CLIP-Vision-BERT pre-trained on translated [Conceptual-12M](https://github.com/google-research-datasets/conceptual-12m) image-text pairs using a masked language modeling (MLM) objective. 10M cleaned image-text pairs are translated using [mBART-50 one-to-many model](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) to 2.5M examples each in English, French, German and Spanish. This model is based on the VisualBERT which was introduced in [this paper](https://arxiv.org/abs/1908.03557) and first released in [this repository](https://github.com/uclanlp/visualbert). We trained CLIP-Vision-BERT model during community week hosted by Huggingface 🤗 using JAX/Flax. This checkpoint is pre-trained for 70k steps. ## Model description CLIP-Vision-BERT is a modified BERT model which takes in visual embeddings from CLIP-Vision transformer and concatenates them with BERT textual embeddings before passing them to the self-attention layers of BERT. This is done for deep cross-modal interaction between the two modes. ## Intended uses & limitations❗️ You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks such as visuo-linguistic sequence classification or visual question answering. We used this model to fine-tuned on a multi-translated version of the visual question answering task - [VQA v2](https://visualqa.org/challenge.html). Since Conceptual-12M is a dataset scraped from the internet, it will involve some biases which will also affect all fine-tuned versions of this model. ### How to use❓ You can use this model directly with a pipeline for masked language modeling. You will need to clone the model from [here](https://github.com/gchhablani/multilingual-vqa). An example of usage is shown below: ```python >>> from torchvision.io import read_image >>> import numpy as np >>> import os >>> from transformers import CLIPProcessor, BertTokenizerFast >>> from model.flax_clip_vision_bert.modeling_clip_vision_bert import FlaxCLIPVisionBertForMaskedLM >>> image_path = os.path.join('images/val2014', os.listdir('images/val2014')[0]) >>> img = read_image(image_path) >>> clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32') ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy. >>> clip_outputs = clip_processor(images=img) >>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0) # Need to transpose images as model expected channel last images. >>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased') >>> model = FlaxCLIPVisionBertForMaskedLM.from_pretrained('flax-community/clip-vision-bert-cc12m-70k') >>> text = "Three teddy [MASK] in a showcase." 
>>> tokens = tokenizer([text], return_tensors="np") >>> pixel_values = np.concatenate([clip_outputs['pixel_values']]) >>> outputs = model(pixel_values=pixel_values, **tokens) >>> indices = np.where(tokens['input_ids']==tokenizer.mask_token_id) >>> preds = outputs.logits[indices][0] >>> sorted_indices = np.argsort(preds)[::-1] # Get reverse sorted scores >>> top_5_indices = sorted_indices[:5] >>> top_5_tokens = tokenizer.convert_ids_to_tokens(top_5_indices) >>> top_5_scores = preds[top_5_indices] >>> print(dict(zip(top_5_tokens, top_5_scores))) {'bears': 19.400345, 'bear': 17.866995, 'animals': 14.453735, 'dogs': 14.427426, 'girls': 14.097499} ``` ## Training data 🏋🏻‍♂️ The CLIP-Vision-BERT model was pre-trained on a translated version of the Conceptual-12m dataset in four languages using mBART-50: English, French, German and Spanish, with 2.5M image-text pairs in each. The dataset captions and image urls can be downloaded from [flax-community/conceptual-12m-mbart-50-translated](https://huggingface.co/datasets/flax-community/conceptual-12m-mbart-50-multilingual). ## Data Cleaning 🧹 Though the original dataset contains 12M image-text pairs, a lot of the URLs are invalid now, and in some cases, images are corrupt or broken. We remove such examples from our data, which leaves us with approximately 10M image-text pairs. **Splits** We used 99% of the 10M examples as a train set, and the remaining ~ 100K examples as our validation set. ## Training procedure 👨🏻‍💻 ### Preprocessing The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of approximately 110,000. The beginning of a new document is marked with `[CLS]` and the end of one by `[CLS]` The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The visual embeddings are taken from the CLIP-Vision model and combined with the textual embeddings inside the BERT embedding layer. The padding is done in the middle. Here is an example of what the embeddings look like: ``` [CLS Emb] [Textual Embs] [SEP Emb] [Pad Embs] [Visual Embs] ``` A total length of 128 tokens, including the visual embeddings, is used. The texts are truncated or padded accordingly. ### Pretraining The checkpoint of the model was trained on Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores) **8 v3 TPU cores** for 70k steps with a per device batch size of 64 and a max sequence length of 128. The optimizer used is Adafactor with a learning rate of 1e-4, learning rate warmup for 1,000 steps, and linear decay of the learning rate after. We tracked experiments using TensorBoard. Here is the link to the main dashboard: [CLIP Vision BERT CC12M Pre-training Dashboard](https://huggingface.co/flax-community/multilingual-vqa-pt-ckpts/tensorboard) #### **Pretraining Results 📊** The model at this checkpoint reached **eval accuracy of 67.85%** and **with train loss at 1.756 and eval loss at 1.706**. ## Team Members - Gunjan Chhablani [@gchhablani](https://hf.co/gchhablani) - Bhavitvya Malik[@bhavitvyamalik](https://hf.co/bhavitvyamalik) ## Acknowledgements We thank [Nilakshan Kunananthaseelan](https://huggingface.co/knilakshan20) for helping us whenever he could get a chance. 
We also thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the CC-12M data on our TPU-VMs and we are very grateful to him. This project would not be possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla) who met with us frequently and helped review our approach and guided us throughout the project. Huge thanks to Huggingface 🤗 & Google Jax/Flax team for such a wonderful community week and for answering our queries on the Slack channel, and for providing us with the TPU-VMs. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
{}
flax-community/clip-vision-bert-cc12m-70k
null
[ "transformers", "jax", "clip-vision-bert", "fill-mask", "arxiv:1908.03557", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# CLIP-Vision-BERT Multilingual VQA Model Fine-tuned CLIP-Vision-BERT on translated [VQAv2](https://visualqa.org/challenge.html) image-text pairs using sequence classification objective. We translate the dataset to three other languages other than English: French, German, and Spanish using the [MarianMT Models](https://huggingface.co/transformers/model_doc/marian.html). This model is based on the VisualBERT which was introduced in [this paper](https://arxiv.org/abs/1908.03557) and first released in [this repository](https://github.com/uclanlp/visualbert). The output is 3129 class logits, the same classes as used by VisualBERT authors. The initial weights are loaded from the Conceptual-12M 60k [checkpoints](https://huggingface.co/flax-community/clip-vision-bert-cc12m-60k). We trained the CLIP-Vision-BERT VQA model during community week hosted by Huggingface 🤗 using JAX/Flax. ## Model description CLIP-Vision-BERT is a modified BERT model which takes in visual embeddings from the CLIP-Vision transformer and concatenates them with BERT textual embeddings before passing them to the self-attention layers of BERT. This is done for deep cross-modal interaction between the two modes. ## Intended uses & limitations❗️ This model is fine-tuned on a multi-translated version of the visual question answering task - [VQA v2](https://visualqa.org/challenge.html). Since VQAv2 is a dataset scraped from the internet, it will involve some biases which will also affect all fine-tuned versions of this model. ### How to use❓ You can use this model directly on visual question answering. You will need to clone the model from [here](https://github.com/gchhablani/multilingual-vqa). An example of usage is shown below: ```python >>> from torchvision.io import read_image >>> import numpy as np >>> import os >>> from transformers import CLIPProcessor, BertTokenizerFast >>> from model.flax_clip_vision_bert.modeling_clip_vision_bert import FlaxCLIPVisionBertForSequenceClassification >>> image_path = os.path.join('images/val2014', os.listdir('images/val2014')[0]) >>> img = read_image(image_path) >>> clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32') ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy. >>> clip_outputs = clip_processor(images=img) >>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0) # Need to transpose images as model expected channel last images. >>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased') >>> model = FlaxCLIPVisionBertForSequenceClassification.from_pretrained('flax-community/clip-vision-bert-vqa-ft-6k') >>> text = "Are there teddy bears in the image?" >>> tokens = tokenizer([text], return_tensors="np") >>> pixel_values = np.concatenate([clip_outputs['pixel_values']]) >>> outputs = model(pixel_values=pixel_values, **tokens) >>> preds = outputs.logits[0] >>> sorted_indices = np.argsort(preds)[::-1] # Get reverse sorted scores >>> top_5_indices = sorted_indices[:5] >>> top_5_tokens = list(map(model.config.id2label.get,top_5_indices)) >>> top_5_scores = preds[top_5_indices] >>> print(dict(zip(top_5_tokens, top_5_scores))) {'yes': 15.809224, 'no': 7.8785815, '<unk>': 4.622649, 'very': 4.511462, 'neither': 3.600822} ``` ## Training data 🏋🏻‍♂️ The CLIP-Vision-BERT model was fine-tuned on the translated version of the VQAv2 dataset in four languages using Marian: English, French, German and Spanish. Hence, the dataset is four times the original English questions. 
The dataset questions and image URLs/paths can be downloaded from [flax-community/multilingual-vqa](https://huggingface.co/datasets/flax-community/multilingual-vqa). ## Data Cleaning 🧹 Though the original dataset contains 443,757 train and 214,354 validation image-question pairs. We only use the `multiple_choice_answer`. The answers which are not present in the 3129 classes are mapped to the `<unk>` label. **Splits** We use the original train-val splits from the VQAv2 dataset. After translation, we get 1,775,028 train image-text pairs, and 857,416 validation image-text pairs. ## Training procedure 👨🏻‍💻 ### Preprocessing The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of approximately 110,000. The beginning of a new document is marked with `[CLS]` and the end of one by `[SEP]`. ### Fine-tuning The checkpoint of the model was trained on Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores) **8 v3 TPU cores** for 6k steps with a per device batch size of 128 and a max sequence length of 128. The optimizer used is AdamW with a learning rate of 5e-5, learning rate warmup for 1600 steps, and linear decay of the learning rate after. We tracked experiments using TensorBoard. Here is link to main dashboard: [CLIP Vision BERT VQAv2 Fine-tuning Dashboard](https://huggingface.co/flax-community/multilingual-vqa-pt-60k-ft/tensorboard) #### **Fine-tuning Results 📊** The model at this checkpoint reached **eval accuracy of 0.49** on our multilingual VQAv2 dataset. ## Team Members - Gunjan Chhablani [@gchhablani](https://hf.co/gchhablani) - Bhavitvya Malik[@bhavitvyamalik](https://hf.co/bhavitvyamalik) ## Acknowledgements We thank [Nilakshan Kunananthaseelan](https://huggingface.co/knilakshan20) for helping us whenever he could get a chance. We also thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the CC-12M data on our TPU-VMs and we are very grateful to him. This project would not be possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla) who met with us frequently and helped review our approach and guided us throughout the project. Huge thanks to Huggingface 🤗 & Google Jax/Flax team for such a wonderful community week and for answering our queries on the Slack channel, and for providing us with the TPU-VMs. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
{}
flax-community/clip-vision-bert-vqa-ft-6k
null
[ "transformers", "jax", "clip-vision-bert", "text-classification", "arxiv:1908.03557", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# CLIP-Vision-Marian Seq2Seq Encoder-Decoder Model

CLIP-Vision-Marian is pre-trained on a subset of Spanish-translated Conceptual-12M image-text pairs using a seq2seq model training objective. 2.5M cleaned English image-text pairs were translated using the English-Spanish Marian model. We trained the CLIP-Vision-Marian model during the community week hosted by Huggingface 🤗 using JAX/Flax.

## Model description

CLIP-Vision-Marian is a modified transformers model which takes in visual embeddings from the CLIP-Vision transformer and feeds them into the `encoder_hidden_states` of a Marian decoder. This is done for deep cross-modal interaction via `cross-attention` between the two modes. The decoder then predicts logits for the `input_ids` provided and can be used for generation.

## Intended uses & limitations❗️

You can use the raw model as an encoder-decoder network where you want the encoder to encode images and the decoder to decode text. Note that this model is primarily aimed at being fine-tuned on tasks like Spanish image captioning.

### How to use❓

You will need to clone the model from [here](https://github.com/bhavitvyamalik/spanish-image-captioning). An example of usage is shown below:

```python
>>> from torchvision.io import read_image
>>> import numpy as np
>>> import wget
>>> import os
>>> from transformers import CLIPProcessor, MarianTokenizer
>>> from models.flax_clip_vision_marian.modeling_clip_vision_marian import FlaxCLIPVisionMarianMT
>>> img = wget.download("https://huggingface.co/streamlitiframe/flax-community/spanish-image-captioning/+/media/55a8898e61131569cc0ed4e72a8b3092969d63c2dff4f47ed9ef0d89.jpeg")
>>> img = read_image(img)  # reading image
>>> clip_processor = CLIPProcessor.from_pretrained('flax-community/clip-vit-base-patch32_marian')
>>> clip_outputs = clip_processor(images=img)
>>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0)  # Need to transpose images as the model expects channel-last images.
>>> tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-es')
>>> model = FlaxCLIPVisionMarianMT.from_pretrained('flax-community/clip-vit-base-patch32_marian-es')
>>> pixel_values = np.concatenate([clip_outputs['pixel_values']])  # batch the (single) image
>>> output_ids = model.generate(pixel_values, early_stopping=True, num_beams=4, max_length=64).sequences
>>> output_string = tokenizer.batch_decode(output_ids.reshape(-1, 64), skip_special_tokens=True, max_length=64)
>>> output_string
# Sopa de avena en un tazón blanco con arándanos frescos
```

## Training data 🏋🏻‍♂️

The Spanish image captioning model was trained on a subset of the Conceptual 12M dataset by Google:

[Conceptual 12M](https://github.com/google-research-datasets/conceptual-12m), introduced by Changpinyo et al. in [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts](https://arxiv.org/abs/2102.08981).

The translated dataset can be downloaded from [conceptual-12m-multilingual-marian-es](https://huggingface.co/datasets/flax-community/conceptual-12m-multilingual-marian-es). We do not provide images as we do not own any of them. One can download images from the `image_url` section of the original Conceptual 12M dataset.

## Data Cleaning 🧹

Though the original dataset contains 12M image-text pairs, a lot of the URLs are invalid now, and in some cases, images are corrupt or broken. We remove such examples from our data, which leaves us with approximately 10M image-text pairs, out of which we took only 2.5M image-caption pairs.
#### **Train set:** Total data: <br> 2475000 captions <br> 2475000 images <br> #### **Validation set** Total data: <br> 25000 captions <br> 25000 images <br> ## Training procedure 👨🏻‍💻 ### Training The model was trained on Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores) **8 v3 TPU cores** for 42K steps with a batch size of 128 and a sequence length of 128. The optimizer used is Adam with a learning rate of 3e-4, β1 = 0.9, β2 = 0.98 and ε = 1e-8, a weight decay of 0.01, learning rate warmup for 1,000 steps and linear decay of the learning rate after. We tracked experiments using Tensorboard which can be found in `Training Metrics` tab. #### **Pretraining Results 📊** Our model reached **eval loss of ~3.1** around ~20K steps. Here are the BLEU^ scores for different languages: |Language |BLEU-1|BLEU-2|BLEU-3|BLEU-4| |--------------------------|------|------|------|------| |Spanish | 0.2015| 0.1348| 0.09982| 0.0748| ^BLEU scores are out of 1 ## **App Demo** You can try out our model on 🤗 Huggingface's spaces 🪐 : [Streamlit app of Spanish Image Captioning model on Huggingface Spaces](https://huggingface.co/spaces/flax-community/spanish-image-captioning) ## Team Members - Bhavitvya Malik [@bhavitvyamalik](https://github.com/bhavitvyamalik) - Gunjan Chhablani [@gchhablani](https://github.com/gchhablani) ## Credits Thanks to Huggingface 🤗 & Google JAX/Flax team for such a wonderful community week. Big thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@patil-suraj](https://github.com/patil-suraj) for helping us with our solution during the community week. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
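For reference, corpus-level BLEU-1 to BLEU-4 scores like the ones reported above can be computed along the following lines. This is a hedged sketch, not the project's evaluation script; the card does not specify the exact tooling, and the NLTK-based computation and toy captions below are assumptions for illustration.

```python
# Hedged sketch: corpus-level BLEU-1..4 for generated captions against
# single references, using NLTK.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [["sopa de avena en un tazón blanco con arándanos".split()]]  # reference captions per image
hypotheses = ["sopa de avena en un tazón con arándanos frescos".split()]   # one generated caption per image

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))
    score = corpus_bleu(references, hypotheses, weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.4f}")
```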
{}
flax-community/clip-vit-base-patch32_marian-es
null
[ "transformers", "jax", "tensorboard", "clip-vision-marian", "arxiv:2102.08981", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# CLIP-Vision-mBART50 Seq2Seq Encoder-Decoder Model

CLIP-Vision-mBART50 is pre-trained on a subset of translated Conceptual-12M image-text pairs using a seq2seq model training objective. 2.5M cleaned English image-text pairs were translated using Marian models for the respective languages, giving 2.5M examples each in English, French, German and Spanish. We trained the CLIP-Vision-mBART50 model during the community week hosted by Huggingface 🤗 using JAX/Flax.

## Model description

CLIP-Vision-mBART50 is a modified transformers model which takes in visual embeddings from the CLIP-Vision transformer and feeds them into the `encoder_hidden_states` of an mBART50 decoder. This is done for deep cross-modal interaction via `cross-attention` between the two modes. The decoder then predicts logits for the `input_ids` provided and can be used for generation.

## Intended uses & limitations❗️

You can use the raw model as an encoder-decoder network where you want the encoder to encode images and the decoder to decode text. Note that this model is primarily aimed at being fine-tuned on tasks like multi-lingual/mono-lingual image captioning.

### How to use❓

You will need to clone the model from [here](https://github.com/gchhablani/multilingual-image-captioning). An example of usage is shown below:

```python
from torchvision.io import read_image
import numpy as np
import os, wget
from transformers import CLIPProcessor, MBart50TokenizerFast
from model.flax_clip_vision_mbart.modeling_clip_vision_mbart import FlaxCLIPVisionMBartForConditionalGeneration

img = wget.download("http://images.cocodataset.org/val2017/000000397133.jpg")
img = read_image(img)  # reading image
clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32')
clip_outputs = clip_processor(images=img)
clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0)  # Need to transpose images as the model expects channel-last images.
tokenizer = MBart50TokenizerFast.from_pretrained('facebook/mbart-large-50')
model = FlaxCLIPVisionMBartForConditionalGeneration.from_pretrained('flax-community/clip-vit-base-patch32_mbart-large-50')
pixel_values = np.concatenate([clip_outputs['pixel_values']])  # batch the (single) image
output_ids = model.generate(pixel_values, forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"], num_beams=4, max_length=64).sequences
# es_XX is the language code in which you want the caption
# en_XX: English, fr_XX: French, es_XX: Spanish, de_DE: Deutsch
output_string = tokenizer.batch_decode(output_ids.reshape(-1, 64), skip_special_tokens=True, max_length=64)
output_string  # Un restaurante u otro lugar para comer en el Hotel
```

## Training data 🏋🏻‍♂️

The multi-lingual image captioning model was trained on a subset of the Conceptual 12M dataset by Google:

[Conceptual 12M](https://github.com/google-research-datasets/conceptual-12m), introduced by Changpinyo et al. in [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts](https://arxiv.org/abs/2102.08981).

The translated dataset can be downloaded from [conceptual-12m-multilingual-marian](https://huggingface.co/datasets/flax-community/conceptual-12m-multilingual-marian). We do not provide images as we do not own any of them. One can download images from the `image_url` section of the original Conceptual 12M dataset.

## Data Cleaning 🧹

Though the original dataset contains 12M image-text pairs, a lot of the URLs are invalid now, and in some cases, images are corrupt or broken. We remove such examples from our data, which leaves us with approximately 10M image-text pairs.
#### **Train set:** Total data: 10010625 captions, 2502656 images <br> Language-wise captions distribution: <br> English: 2502656<br> Spanish: 2502656<br> Deutsch: 2502656<br> French: 2502656<br> #### **Validation set** Total data: 110592 captions, 27648 images <br> Language-wise captions distribution: <br> English: 27648<br> Spanish: 27648<br> Deutsch: 27648<br> French: 27648<br> ## Training procedure 👨🏻‍💻 ### Training The model was trained on Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores) **8 v3 TPU cores** for 42K steps with a batch size of 128 and a sequence length of 128. The optimizer used is Adam with a learning rate of 3e-4, β1 = 0.9, β2 = 0.98 and ε = 1e-8, a weight decay of 0.01, learning rate warmup for 1,000 steps and linear decay of the learning rate after. We tracked experiments using Tensorboard which can be found in `Training Metrics` tab. BLEU scores for languages other than English might be wrongly tracked but the model gives good performance in other languages too as evident from the evaluation scores. #### **Pretraining Results 📊** Our model reached **eval loss of ~2.6** around ~60k steps. Here are the BLEU scores (out of 1) for different languages: |Language |BLEU-1|BLEU-2|BLEU-3|BLEU-4| |--------------------------|------|------|------|------| |English | 0.13083| 0.08887| 0.06681 | 0.04899| |Spanish | 0.15981| 0.09858| 0.06918| 0.04776| |German | 0.14234| 0.09817| 0.07405| 0.0515| |French | 0.13021| 0.08862| 0.06598| 0.04647| Model used: ckpt-51999/ In order to reproduce the results, one can use the [evaluation script](https://github.com/gchhablani/multilingual-image-captioning/blob/main/evaluation.py) available in this project's repository. ## **App Demo** You can try out our model on 🤗 Huggingface's spaces 🪐 : [Streamlit app of Multi-lingual Image Captioning model on Huggingface Spaces](https://huggingface.co/spaces/flax-community/multilingual-image-captioning) ## Team Members - Bhavitvya Malik [@bhavitvyamalik](https://github.com/bhavitvyamalik) - Gunjan Chhablani [@gchhablani](https://github.com/gchhablani) ## Credits Thanks to Huggingface 🤗 & Google JAX/FLAX team for such a wonderful community week. Big thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@patil-suraj](https://github.com/patil-suraj) for helping us with our solution during the community week. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
{}
flax-community/clip-vit-base-patch32_mbart-large-50
null
[ "transformers", "jax", "tensorboard", "clip-vision-mbart", "text2text-generation", "arxiv:2102.08981", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
flax-community/code-mt5-base-batch-mix
null
[ "transformers", "pytorch", "jax", "tensorboard", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# Tokenizer

We trained our tokenizer using [sentencepiece](https://github.com/google/sentencepiece)'s unigram tokenizer. We then loaded the tokenizer as `MT5TokenizerFast`. A minimal sketch of this setup is given at the end of this card.

## Model

We used the [MT5-base](https://huggingface.co/google/mt5-base) model.

## Datasets

We used the [Code Search Net](https://huggingface.co/datasets/code_search_net) dataset and some data scraped from the internet to train the model. We maintained a list of datasets where each dataset contained code in a single language.

## Plots

### Train loss

![train loss](https://i.ibb.co/x53Wm8n/train-loss.png)

### Evaluation loss

![eval loss](https://i.ibb.co/McB2jnf/eval-loss.png)

### Evaluation accuracy

![eval accuracy](https://i.ibb.co/YDGhLdn/eval-accuracy.png)

### Learning rate

![learning rate](https://i.ibb.co/CMStzWv/learning-rate.png)

## Fine-tuning (WIP)

We fine-tuned the model with the [CodeXGLUE code-to-code-trans dataset](https://huggingface.co/datasets/code_x_glue_cc_code_to_code_trans) and scraped data.
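The card describes training a unigram sentencepiece tokenizer and wrapping it as `MT5TokenizerFast`, but does not include the corresponding code. The sketch below is a hedged reconstruction: the corpus file name, model prefix, and vocabulary size are assumptions, and passing the sentencepiece file as `vocab_file` relies on the standard slow-to-fast conversion in `transformers` rather than the authors' exact script.

```python
# Hedged sketch of the tokenizer setup described above (names and sizes assumed).
import sentencepiece as spm
from transformers import MT5TokenizerFast

# Train a unigram sentencepiece model on a plain-text code corpus.
spm.SentencePieceTrainer.train(
    input="code_corpus.txt",      # assumed corpus file
    model_prefix="code_unigram",  # produces code_unigram.model / code_unigram.vocab
    vocab_size=32000,             # assumed vocabulary size
    model_type="unigram",
)

# Wrap the trained sentencepiece model as an MT5 tokenizer.
tokenizer = MT5TokenizerFast(vocab_file="code_unigram.model")
print(tokenizer.tokenize("def add(a, b): return a + b"))
```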
{}
flax-community/code-mt5-base
null
[ "transformers", "pytorch", "jax", "tensorboard", "safetensors", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/code_clippy_data
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-image
transformers
## DALL·E mini - Generate images from text <img style="text-align:center; display:block;" src="https://raw.githubusercontent.com/borisdayma/dalle-mini/main/img/logo.png" width="200"> * [Technical Report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA) * [Demo](https://huggingface.co/spaces/flax-community/dalle-mini) ### Model Description This is an attempt to replicate OpenAI's [DALL·E](https://openai.com/blog/dall-e/), a model capable of generating arbitrary images from a text prompt that describes the desired result. ![DALL·E mini demo screenshot](img/demo_screenshot.png) This model's architecture is a simplification of the original, and leverages previous open source efforts and available pre-trained models. Results have lower quality than OpenAI's, but the model can be trained and used on less demanding hardware. Our training was performed on a single TPU v3-8 for a few days. ### Components of the Architecture The system relies on the Flax/JAX infrastructure, which are ideal for TPU training. TPUs are not required, both Flax and JAX run very efficiently on GPU backends. The main components of the architecture include: * An encoder, based on [BART](https://arxiv.org/abs/1910.13461). The encoder transforms a sequence of input text tokens to a sequence of image tokens. The input tokens are extracted from the text prompt by using the model's tokenizer. The image tokens are a fixed-length sequence, and they represent indices in a VQGAN-based pre-trained codebook. * A decoder, which converts the image tokens to image pixels. As mentioned above, the decoder is based on a [VQGAN model](https://compvis.github.io/taming-transformers/). The model definition we use for the encoder can be downloaded from our [Github repo](https://github.com/borisdayma/dalle-mini). The encoder is represented by the class `CustomFlaxBartForConditionalGeneration`. To use the decoder, you need to follow the instructions in our accompanying VQGAN model in the hub, [flax-community/vqgan_f16_16384](https://huggingface.co/flax-community/vqgan_f16_16384). ### How to Use The easiest way to get familiar with the code and the models is to follow the inference notebook we provide in our [github repo](https://github.com/borisdayma/dalle-mini/blob/main/dev/inference/inference_pipeline.ipynb). For your convenience, you can open it in Google Colaboratory: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/dalle-mini/blob/main/dev/inference/inference_pipeline.ipynb) If you just want to test the trained model and see what it comes up with, please visit [our demo](https://huggingface.co/spaces/flax-community/dalle-mini), available in 🤗 Spaces. ### Additional Details Our [report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA) contains more details about how the model was trained and shows many examples that demonstrate its capabilities.
{"language": ["en"], "pipeline_tag": "text-to-image", "inference": false}
flax-community/dalle-mini
null
[ "transformers", "jax", "bart", "text2text-generation", "text-to-image", "en", "arxiv:1910.13461", "autotrain_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT2-dansk-wikipedia

A Danish GPT2-style model trained using the Flax CLM pipeline on the Danish part of the wiki40b dataset: https://huggingface.co/datasets/wiki40b

## Model series

This model is part of a series of models trained on TPU with Flax/JAX during the Huggingface Flax/JAX community week.

## Gpt models

## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/

## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki

## Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki

## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki

## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki

## Roberta models

## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki

## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar

## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi

## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish

## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish

## Data cleaning and preprocessing

The data was cleaned and preprocessed using the following script. Make sure to install the dependencies for `beam_runner` to make the dataset work.

```python
from datasets import load_dataset

def load_and_clean_wiki():
    dataset = load_dataset('wiki40b', 'da', beam_runner='DirectRunner', split="train")
    #dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner')
    dataset = dataset.remove_columns(['wikidata_id', 'version_id'])
    filtered_dataset = dataset.map(filter_wikipedia)
    # filtered_dataset[:3]
    # print(filtered_dataset[:3])
    return filtered_dataset

def filter_wikipedia(batch):
    batch["text"] = " ".join(batch["text"].split("\n_START_SECTION_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_PARAGRAPH_\n"))
    batch["text"] = " ".join(batch["text"].split("_NEWLINE_"))
    batch["text"] = " ".join(batch["text"].split("\xa0"))
    return batch
```

## Training script

The following training script was used to train the model.

```bash
./run_clm_flax.py \
    --output_dir="${MODEL_DIR}" \
    --model_type="gpt2" \
    --config_name="${MODEL_DIR}" \
    --tokenizer_name="${MODEL_DIR}" \
    --dataset_name="wiki40b" \
    --dataset_config_name="da" \
    --do_train --do_eval \
    --block_size="512" \
    --per_device_train_batch_size="64" \
    --per_device_eval_batch_size="64" \
    --learning_rate="5e-3" --warmup_steps="1000" \
    --adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" \
    --overwrite_output_dir \
    --num_train_epochs="20" \
    --logging_steps="500" \
    --save_steps="1000" \
    --eval_steps="2500" \
    --push_to_hub
```
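The card does not include an inference example. Below is a minimal, hedged usage sketch with the `transformers` text-generation pipeline; the prompt is the card's widget example, and the generation settings are illustrative only.

```python
from transformers import pipeline

# Minimal usage sketch (not from the original card): load the published
# checkpoint and generate a Danish continuation.
generator = pipeline("text-generation", model="flax-community/dansk-gpt-wiki")
print(generator("Jeg elsker livet", max_length=50, num_return_sequences=1))
```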
{"language": "da", "widget": [{"text": "Jeg elsker livet"}]}
flax-community/dansk-gpt-wiki
null
[ "transformers", "pytorch", "jax", "tensorboard", "safetensors", "gpt2", "text-generation", "da", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/diet-nerf-models
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/dummy_dataset
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/flax-model-test
null
[ "jax", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
flax-community/ft5-base-openwebtext
null
[ "transformers", "jax", "tensorboard", "f_t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
flax-community/ft5-cnn-dm
null
[ "transformers", "pytorch", "jax", "tensorboard", "f_t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# ft5 with re-zero
{}
flax-community/ft5-rezero-base-openwebtext
null
[ "transformers", "jax", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
flax-community/git-byt5-base
null
[ "transformers", "jax", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
flax-community/git-t5-base
null
[ "transformers", "jax", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
flax-community/git-t5-v1_1-base
null
[ "transformers", "jax", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/gpt-2-code-clippy
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# GPT-2 GERMAN

## Model description

See [Open AI's model card](https://github.com/openai/gpt-2/blob/master/model_card.md) and [Huggingface's model card](https://huggingface.co/gpt2) for the original model.

## Intended uses & limitations

#### How to use

```python
def foo(bar):
    bar += 1
    return bar
```

#### Limitations and bias

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
https://dl.acm.org/doi/10.1145/3442188.3445922

```
@inproceedings{10.1145/3442188.3445922,
  author = {Bender, Emily M. and Gebru, Timnit and McMillan-Major, Angelina and Shmitchell, Shmargaret},
  title = {On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜},
  year = {2021},
  isbn = {9781450383097},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3442188.3445922},
  doi = {10.1145/3442188.3445922},
  abstract = {The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.},
  booktitle = {Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency},
  pages = {610--623},
  numpages = {14},
  location = {Virtual Event, Canada},
  series = {FAccT '21}
}
```

## Training data

https://huggingface.co/datasets/german-nlp-group/german_common_crawl

```json
{'url': 'http://my-shop.ru/shop/books/545473.html',
 'date_download': '2016-10-20T19:38:58Z',
 'digest': 'sha1:F62EMGYLZDIKF4UL5JZYU47KWGGUBT7T',
 'length': 1155,
 'nlines': 4,
 'source_domain': 'my-shop.ru',
 'title': 'Grammatikalische Liebeslieder. Methodische Vorschläge',
 'raw_content': 'Grammatikalische Liebeslieder.
[....]',
 'cc_segment': 'crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/wet/CC-MAIN-20161020183837-00354-ip-10-171-6-4.ec2.internal.warc.wet.gz',
 'original_nlines': 99,
 'original_length': 2672,
 'language': 'de',
 'language_score': 1.0,
 'perplexity': 283.0,
 'bucket': 'head'}
```

## Training procedure

TODO (See [training](training.md))

## Eval results

TODO: Self-BLEU, Diversity, and other metrics from https://arxiv.org/abs/1904.09751 (a hedged sketch of Self-BLEU is given at the end of this card).

```
@inproceedings{DBLP:conf/iclr/HoltzmanBDFC20,
  author    = {Ari Holtzman and Jan Buys and Li Du and Maxwell Forbes and Yejin Choi},
  title     = {The Curious Case of Neural Text Degeneration},
  booktitle = {8th International Conference on Learning Representations, {ICLR} 2020, Addis Ababa, Ethiopia, April 26-30, 2020},
  publisher = {OpenReview.net},
  year      = {2020},
  url       = {https://openreview.net/forum?id=rygGQyrFvH},
  timestamp = {Thu, 21 Jan 2021 17:36:46 +0100},
  biburl    = {https://dblp.org/rec/conf/iclr/HoltzmanBDFC20.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

### BibTeX entry and citation info

Does the Huggingface hub generate DOIs? Otherwise, maybe use Kaggle or Zenodo to generate one.

```bibtex
@inproceedings{...,
  year={2021}
}
```
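The Eval results section lists Self-BLEU as a planned metric but gives no code. Below is a hedged sketch of how Self-BLEU is commonly computed over a set of generated samples; this is not the authors' evaluation code, and the NLTK-based implementation and toy samples are assumptions for illustration.

```python
# Hedged sketch: Self-BLEU over generated samples. Each sample is scored
# against all remaining samples as references; lower Self-BLEU suggests
# more diverse generations.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(samples, n=4):
    smooth = SmoothingFunction().method1
    weights = tuple(1.0 / n for _ in range(n))
    scores = []
    for i, hypothesis in enumerate(samples):
        references = [s.split() for j, s in enumerate(samples) if j != i]
        scores.append(sentence_bleu(references, hypothesis.split(),
                                    weights=weights, smoothing_function=smooth))
    return sum(scores) / len(scores)

generations = ["ein kurzer beispielsatz", "noch ein kurzer beispielsatz", "ein ganz anderer satz"]
print(self_bleu(generations, n=2))
```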
{"language": [], "tags": [], "datasets": [], "metrics": []}
flax-community/gpt-2-german
null
[ "arxiv:1904.09751", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Spanish GPT-2 GPT-2 model trained from scratch on the Spanish portion of [OSCAR](https://huggingface.co/datasets/viewer/?dataset=oscar). The model is trained with Flax and using TPUs sponsored by Google since this is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organised by HuggingFace. ## Model description The model used for training is [OpenAI's GPT-2](https://openai.com/blog/better-language-models/), introduced in the paper ["Language Models are Unsupervised Multitask Learners"](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever. This model is available in the 🤗 [Model Hub](https://huggingface.co/gpt2). ## Training data Spanish portion of OSCAR or **O**pen **S**uper-large **C**rawled **A**LMAnaCH co**R**pus, a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. This corpus is available in the 🤗 [Datasets](https://huggingface.co/datasets/oscar) library. ## Team members - Manuel Romero ([mrm8488](https://huggingface.co/mrm8488)) - María Grandury ([mariagrandury](https://huggingface.co/mariagrandury)) - Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps)) - Daniel Vera ([daveni](https://huggingface.co/daveni)) - Sri Lakshmi ([srisweet](https://huggingface.co/srisweet)) - José Posada ([jdposa](https://huggingface.co/jdposa)) - Santiago Hincapie ([shpotes](https://huggingface.co/shpotes)) - Jorge ([jorgealro](https://huggingface.co/jorgealro))
{"language": "es", "tags": ["text-generation"], "datasets": ["oscar"], "widgets": [{"text": "\u00c9rase un vez "}, {"text": "Frase: Esta pel\u00edcula es muy agradable. Sentimiento: positivo Frase: Odiaba esta pel\u00edcula, apesta. Sentimiento: negativo Frase: Esta pel\u00edcula fue bastante mala. Sentimiento: "}]}
flax-community/gpt-2-spanish
null
[ "transformers", "pytorch", "jax", "tensorboard", "safetensors", "gpt2", "text-generation", "es", "dataset:oscar", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT2-Tamil

This repository is created as part of the Flax/Jax community week by Huggingface. The aim of this project is to pretrain a language model using GPT-2 specifically for the Tamil language.

## Setup:

To set up the project, run the following command:

```bash
pip install -r requirements.txt
```

## Model:

Pretrained model on the Tamil language using a causal language modeling (CLM) objective.

## Dataset Used:

The GPT-2 model is trained on the [oscar dataset - ta](https://huggingface.co/datasets/oscar).

## Intended uses & limitations:

You can use the raw model for text generation, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt) to look for fine-tuned versions on a task that interests you.

## How to pretrain the model:

To perform training, do the following steps,

- Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.)

```bash
export MODEL_DIR=<model_dir>
```

- Create the config.json by running the following command,

```bash
python src/create_config.py
```

- Create the tokenizer by running the following command,

```bash
python src/train_tokenizer.py
```

- Once the config and tokenizer are created, run the following script to start training the flax model,

```bash
python scripts/train_gpt2-oscar-tamil.sh
```

## How to use:

To perform language generation using the model, the pipeline can be used directly.

- First convert the flax model to pytorch using the following command,

```bash
python src/convert_flax_to_pytorch.py
```

- Use the following snippet to perform language generation,

```python
>>> from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, set_seed
>>> model_name = 'abinayam/gpt-2-tamil'
>>> model = AutoModelWithLMHead.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> set_seed(42)
>>> input_text = "ஒரு ஊரிலே ஒரு காக்கைக்கு"
>>> max_len = 300
>>> no_seq = 5
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> sequence = generator(input_text, max_length=max_len, num_return_sequences=no_seq)
```
{"language": "ta", "datasets": ["oscar", "IndicNLP"], "widget": [{"text": "\u0b92\u0bb0\u0bc1 \u0b8a\u0bb0\u0bbf\u0bb2\u0bc7 \u0b92\u0bb0\u0bc1 \u0b95\u0bbe\u0b95\u0bcd\u0b95\u0bc8\u0b95\u0bcd\u0b95\u0bc1"}]}
flax-community/gpt-2-tamil
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "ta", "dataset:oscar", "dataset:IndicNLP", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
flax-community/gpt-code-clippy-125M-1024-f
null
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/gpt-code-clippy-125M-1024-filtered
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/gpt-code-clippy-125M-256
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
flax-community/gpt-code-clippy-125M-bs2048-raw
null
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-Code-Clippy-1.3B-APPS-all ## Model Description GPT-Neo-1.3B-APPS-all is a GPT-Neo-1.3B fine-tuned on APPS dataset. This model is specialized to solve programming tasks. ## Training data The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each. This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-1.3B-apps). ## Training procedure The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using AdamW optimizer and leaner decay learning rate schedule with 800 warmup steps. To reproduce the training one can use this command with the above script: ``` python run_clm_apps.py \ --output_dir ./gpt-neo-125M-apps \ --model_name_or_path EleutherAI/gpt-neo-125B \ --dataset_name ./apps.py \ --dataset_config_name formatted \ --do_train --do_eval \ --block_size="1024" \ --per_device_train_batch_size="3" \ --per_device_eval_batch_size="3" \ --preprocessing_num_workers="16" \ --learning_rate="8e-5" \ --warmup_steps="800" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --weight_decay="0.1" \ --overwrite_output_dir \ --num_train_epochs="5" \ --logging_steps="50" \ --eval_steps="2000" \ --report_to="wandb" \ --dtype="bfloat16" \ --save_strategy epoch \ --gradient_accumulation_steps 1 \ --all_data true \ ``` ## Intended Use and Limitations The model is fine-tuned to solve programming problems given a text description and optional starter code. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py from transformers import AutoModelForCausalLM, AutoTokenizer, FlaxAutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-1.3B-apps-all-2") tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-1.3B-apps-all-2") prompt = """ A function to greet user. Given a user name it should say hello def greet(name): ANSWER: """ input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device) start = input_ids.size(1) out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2, early_stopping=True, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(out[0][start:])) ``` ### Limitations and Biases The model is intended to be used for research purposes and comes with no guarantees of quality of generated code. The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discuss are highlighted here as it pertains to this dataset and models that may be trained from it. **As well as some differences in views from the paper, particularly around legal implications**. 1. 
**Over-reliance:** This model may generate plausible solutions that may appear correct, but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs, or the introduction of security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset. This model is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Eval results

Coming soon...
{"language": ["en", "python"], "license": "mit", "tags": ["gpt_neo", "code_synthesis"], "datasets": ["apps"]}
flax-community/gpt-neo-1.3B-apps-all-2
null
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "code_synthesis", "dataset:apps", "arxiv:2107.03374", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-Neo-1.3B-APPS-all > **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot** ## Model Description GPT-Neo-1.3B-APPS-all is a GPT-Neo-1.3B finetuned on APPS dataset. This model is specialized to solve programming tasks. ## Training data The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each. This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-1.3B-apps). ## Training procedure The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using AdamW optimizer and leaner decay learning rate schedule with 800 warmup steps. To reproduce the training one can use this command with the above script: ``` python run_clm_apps.py \ --output_dir ./gpt-neo-1.3B-apps \ --model_name_or_path EleutherAI/gpt-neo-1.3B \ --dataset_name ./apps.py \ --dataset_config_name formatted \ --do_train --do_eval \ --block_size="1024" \ --per_device_train_batch_size="3" \ --per_device_eval_batch_size="3" \ --preprocessing_num_workers="16" \ --learning_rate="8e-5" \ --warmup_steps="800" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --weight_decay="0.1" \ --overwrite_output_dir \ --num_train_epochs="5" \ --logging_steps="50" \ --eval_steps="2000" \ --report_to="wandb" \ --dtype="bfloat16" \ --save_strategy epoch \ --gradient_accumulation_steps 1 \ --all_data true \ ``` ## Intended Use and Limitations The model is finetuned to solve programming problems given a text description and optional starter code. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py from transformers import AutoModelForCausalLM, AutoTokenizer, FlaxAutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps-alldata") tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps-alldata") prompt = """ A function to greet user. Given a user name it should say hello def greet(name): ANSWER: """ input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device) start = input_ids.size(1) out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2, early_stopping=True, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(out[0][start:])) ``` ### Limitations and Biases The model is intended to be used for research purposes and comes with no guarantees of quality of generated code. The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. 
Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and to models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.

1. **Over-reliance:** This model may generate plausible solutions that may appear correct, but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs, or the introduction of security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset. GPT-CC is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Eval results

Coming soon...
{"language": ["en", "python"], "license": "mit", "tags": ["gpt_neo", "code_synthesis"], "datasets": ["apps"]}
flax-community/gpt-neo-1.3B-apps-all
null
[ "transformers", "pytorch", "jax", "tensorboard", "gpt_neo", "text-generation", "code_synthesis", "dataset:apps", "arxiv:2107.03374", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-Neo-1.3B-APPS > **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot** ## Model Description GPT-Neo-1.3B-APPS is a GPT-Neo-125M finetuned on APPS dataset. This model is specialized to solve programming tasks. ## Training data The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each. This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-125M-apps). ## Training procedure The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using AdamW optimizer and leaner decay learning rate schedule with 800 warmup steps. To reproduce the training one can use this command with the above script: ```bash python run_clm_apps.py \ --output_dir $HOME/gpt-neo-1.3B-apps \ --model_name_or_path EleutherAI/gpt-neo-1.3B \ --dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \ --dataset_config_name formatted \ --do_train --do_eval \ --block_size="1024" \ --per_device_train_batch_size="3" \ --per_device_eval_batch_size="3" \ --preprocessing_num_workers="16" \ --learning_rate="8e-5" \ --warmup_steps="800" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --weight_decay="0.1" \ --overwrite_output_dir \ --num_train_epochs="5" \ --logging_steps="50" \ --eval_steps="2000" \ --report_to="wandb" \ --dtype="bfloat16" \ --save_strategy epoch \ --gradient_accumulation_steps 1 \ ``` ## Intended Use and Limitations The model is finetuned to solve programming problems given a text description and optional starter code. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py from transformers import AutoModelForCausalLM, AutoTokenizer, FlaxAutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps") tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps") prompt = """ A function to greet user. Given a user name it should say hello def greet(name): ANSWER: """ input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device) start = input_ids.size(1) out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2, early_stopping=True, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(out[0][start:])) ``` ### Limitations and Biases The model is intended to be used for research purposes and comes with no guarantees of quality of generated code. The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. 
Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and to models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.

1. **Over-reliance:** This model may generate plausible solutions that may appear correct, but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs, or the introduction of security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset. GPT-CC is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Eval results

Coming soon...
{"language": ["en", "python"], "license": "mit", "tags": ["gpt_neo", "code_synthesis"], "datasets": ["apps"]}
flax-community/gpt-neo-1.3B-apps
null
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "code_synthesis", "dataset:apps", "arxiv:2107.03374", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/gpt-neo-1.3B-code-clippy-test-1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
flax-community/gpt-neo-1.3B-code-clippy
null
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
flax-community/gpt-neo-1.3B-persian
null
[ "transformers", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
flax-community/gpt-neo-1.3B-resized-embed
null
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-Neo-125M-APPS-all > **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot** ## Model Description GPT-Neo-125M-APPS-all is a GPT-Neo-125M finetuned on APPS dataset. This model is specialized to solve programming tasks. ## Training data The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each. This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-125M-apps). ## Training procedure The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using AdamW optimizer and leaner decay learning rate schedule with 800 warmup steps. To reproduce the training one can use this command with the above script: ```bash python run_clm_apps.py \ --output_dir $HOME/gpt-neo-125M-apps \ --model_name_or_path EleutherAI/gpt-neo-125B \ --dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \ --dataset_config_name formatted \ --do_train --do_eval \ --block_size="1024" \ --per_device_train_batch_size="16" \ --per_device_eval_batch_size="16" \ --preprocessing_num_workers="16" \ --learning_rate="8e-5" \ --warmup_steps="800" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --weight_decay="0.1" \ --overwrite_output_dir \ --num_train_epochs="5" \ --logging_steps="50" \ --eval_steps="2000" \ --report_to="wandb" \ --dtype="bfloat16" \ --save_strategy epoch \ --gradient_accumulation_steps 2 \ --all_data true \ ``` ## Intended Use and Limitations The model is finetuned to solve programming problems given a text description and optional starter code. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py from transformers import AutoModelForCausalLM, AutoTokenizer, FlaxAutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-code-clippy-125M-apps-alldata") tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-code-clippy-125M-apps-alldata") prompt = """ A function to greet user. Given a user name it should say hello def greet(name): ANSWER: """ input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device) start = input_ids.size(1) out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2, early_stopping=True, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(out[0][start:])) ``` ### Limitations and Biases The model is intended to be used for research purposes and comes with no guarantees of quality of generated code. The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. 
Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and to models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.

1. **Over-reliance:** This model may generate plausible solutions that may appear correct, but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs, or the introduction of security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset. GPT-CC is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Eval results

Coming soon...
{"language": ["en", "python"], "license": "mit", "tags": ["gpt_neo", "code_synthesis"], "datasets": ["apps"]}
flax-community/gpt-neo-125M-apps-all
null
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "code_synthesis", "dataset:apps", "arxiv:2107.03374", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-Neo-125M-APPS

> **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**

## Model Description

GPT-Neo-125M-APPS is a GPT-Neo-125M model finetuned on the APPS dataset. This model is specialized to solve programming tasks.

## Training data

The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each.

## Training procedure

The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py).

Training is done for 5 epochs using the AdamW optimizer and a linear decay learning rate schedule with 800 warmup steps. To reproduce the training one can use this command with the above script:

```bash
python run_clm_apps.py \
    --output_dir $HOME/gpt-neo-125M-apps \
    --model_name_or_path EleutherAI/gpt-neo-125M \
    --dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \
    --dataset_config_name formatted \
    --do_train --do_eval \
    --block_size="1024" \
    --per_device_train_batch_size="16" \
    --per_device_eval_batch_size="16" \
    --preprocessing_num_workers="16" \
    --learning_rate="8e-5" \
    --warmup_steps="800" \
    --adam_beta1="0.9" \
    --adam_beta2="0.98" \
    --weight_decay="0.1" \
    --overwrite_output_dir \
    --num_train_epochs="5" \
    --logging_steps="50" \
    --eval_steps="2000" \
    --report_to="wandb" \
    --dtype="bfloat16" \
    --save_strategy epoch \
    --gradient_accumulation_steps 2
```

## Intended Use and Limitations

The model is finetuned to solve programming problems given a text description and optional starter code.

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run on GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-125M-apps").to(device)
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-125M-apps")

prompt = """
A function to greet user. Given a user name it should say hello

def greet(name):

ANSWER:
"""

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device)
start = input_ids.size(1)

out = model.generate(input_ids,
                     do_sample=True,
                     max_length=50,
                     num_beams=2,
                     early_stopping=True,
                     eos_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][start:]))
```

### Limitations and Biases

The model is intended to be used for research purposes and comes with no guarantees of quality of generated code.

The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.

1. **Over-reliance:** This model may generate plausible solutions that may appear correct, but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset; an illustrative prompt-formatting sketch is included at the end of this card. GPT-CC is finetuned from GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Eval results

Coming soon...
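The exact prompt layout used during finetuning is produced by the linked `apps.py` processing script and is not reproduced in this card. The helper below is only an illustrative sketch that mirrors the description / optional starter code / `ANSWER:` layout shown in the usage example above; its name and separators are assumptions, not the verified training format.

```py
def format_apps_prompt(problem_description: str, starter_code: str = "") -> str:
    """Illustrative only: build a prompt in the layout used by the usage example above.

    The actual separators used during finetuning are defined by the dataset
    processing script linked in the Training procedure section.
    """
    prompt = "\n" + problem_description.strip() + "\n\n"
    if starter_code:
        prompt += starter_code.rstrip() + "\n\n"
    prompt += "ANSWER:\n"
    return prompt

# Example (hypothetical usage):
# prompt = format_apps_prompt(
#     "A function to greet user. Given a user name it should say hello",
#     "def greet(name):",
# )
```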
{"language": ["en", "code"], "license": "mit", "tags": ["gpt_neo", "code_synthesis"], "datasets": ["apps"], "language_details": "python code"}
flax-community/gpt-neo-125M-apps
null
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "code_synthesis", "en", "code", "dataset:apps", "arxiv:2107.03374", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Model Card for gpt-neo-125M-code-clippy-dedup-2048

# Model Details

## Model Description

More information needed

- **Developed by:** Flax Community
- **Shared by [Optional]:** Hugging Face
- **Model type:** Text Generation
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:**
  - **Parent Model:** GPT-Neo
- **Resources for more information:**
  - [GitHub Repo](https://github.com/CodedotAl/gpt-code-clippy)

# Uses

## Direct Use

This model can be used for the task of text generation.

## Downstream Use [Optional]

More information needed

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

The model creators note in the [GitHub Repo](https://github.com/CodedotAl/gpt-code-clippy):

> **ISSUE: Wrong Filenames in the Dataset** We recently came to know about a bug which happened during the scraping of the dataset. We found out that the file names are obsolete/misleading. [Refer to this [issue](https://github.com/CodedotAl/gpt-code-clippy/issues/71)] We thank Naman for pointing out the issue. This might have two implications: since the filtering for the training dataset is done using the file extension, we might have had wrong datapoints in the dataset while training and we might have missed a lot of right datapoints that belong to the languages of choice.

# Training Details

## Training Data

The model creators note in the [GitHub Repo](https://github.com/CodedotAl/gpt-code-clippy):

> For fine-tuning GPTNeo-125M on CodeClippy dataset we used AdamW optimizer (beta1=0.9, beta2=0.95) with GPT3-like learning rate schedule (4k warmup steps from 0 to 5e-5 followed by 50k cosine decay steps to 5e-6), weight decay 0.1 and batch size 1024, sequence length 2048.

## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

The model creators note in the [GitHub Repo](https://github.com/CodedotAl/gpt-code-clippy):

> For fine-tuning GPTNeo-125M on CodeClippy dataset we used AdamW optimizer (beta1=0.9, beta2=0.95) with GPT3-like learning rate schedule (4k warmup steps from 0 to 5e-5 followed by 50k cosine decay steps to 5e-6), weight decay 0.1 and batch size 1024, sequence length 2048. The choice of relatively large batch size and low LR with long warmup are made to avoid aggressive updates and preserve the knowledge contained in pretrained GPTNeo weights.

A minimal sketch of this learning rate schedule is included at the end of this card.

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

The model creators note in the [GitHub Repo](https://github.com/CodedotAl/gpt-code-clippy):

> The models are also evaluated on the [APPS](https://github.com/hendrycks/apps) and [HumanEval](https://github.com/openai/human-eval) datasets.

### Factors

More information needed

### Metrics

More information needed

## Results

| Model             | pass@1 | pass@2 | pass@5 | pass@10 |
| ----------------- | :----: | :----: | :----: | :-----: |
| gpt-neo-125M-apps | 0.06%  | 0.12%  | 0.30%  | 0.61%   |

# Model Examination

More information needed

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

GPTNeoForCausalLM

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

More information needed

# Citation

**BibTeX:**

More information needed

**APA:**

More information needed

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

Flax Community in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-125M-code-clippy-dedup-2048")

model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-125M-code-clippy-dedup-2048")
```
</details>
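As referenced in the Training Procedure section, the snippet below is a minimal sketch of the GPT3-like schedule described there (linear warmup from 0 to 5e-5 over 4k steps, then cosine decay over 50k steps down to 5e-6) expressed with `optax`. It only illustrates the shape of the schedule; the actual training configuration lives in the linked repository.

```python
import optax

# Sketch of the schedule quoted above. Note that in optax, `decay_steps` is the
# total schedule length, so it must include the warmup steps as well.
schedule = optax.warmup_cosine_decay_schedule(
    init_value=0.0,
    peak_value=5e-5,
    warmup_steps=4_000,
    decay_steps=4_000 + 50_000,
    end_value=5e-6,
)

# AdamW with the betas and weight decay quoted above.
optimizer = optax.adamw(learning_rate=schedule, b1=0.9, b2=0.95, weight_decay=0.1)
```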
{"tags": ["flax"]}
flax-community/gpt-neo-125M-code-clippy-dedup-2048
null
[ "transformers", "pytorch", "jax", "tensorboard", "gpt_neo", "text-generation", "flax", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-Code-Clippy-125M-from-Scratch

> **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**

## Model Description

GPT-CC-125M-from-Scratch is a [GPT-Neo-125M model](https://huggingface.co/EleutherAI/gpt-neo-125M) pretrained from scratch using causal language modeling on the [Code Clippy Post-deduplication dataset](https://the-eye.eu/public/AI/training_data/code_clippy_data/code_clippy_dedup_data/). The deduplication script can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/data_processing/deduplication/deduplication.py). Code Clippy was scraped from public GitHub repositories (more information in the provided link). This model is specialized to autocomplete methods in multiple programming languages.

As discussed in OpenAI's [Codex paper](https://arxiv.org/abs/2107.03374), we modified the GPT-Neo model and tokenizer to accommodate additional whitespace characters. Specifically, we add the following tokens `["\t\t", " ", " ", " "]`, and since they are all related to indentation, we initialize the embedding layer of these tokens with the same weights as the `\t` token already present in the model, in hopes the model will learn to associate these whitespace characters with indentation faster. A script to automatically do this can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/utilities/add_new_tokens.py); a rough illustrative sketch of the same idea is included at the end of this card.

## Training data

[Code Clippy Deduplicated dataset](https://the-eye.eu/public/AI/training_data/code_clippy_data/code_clippy_dedup_data/). The Python script for the dataset can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/data_processing/code_clippy.py).

## Training procedure

The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/deprecated/run_clm_streaming_flax_v2.py).

```bash
./run_clm_streaming_flax_v2.py \
    --output_dir $HOME/gpt-neo-125M-code-clippy-from-scratch \
    --tokenizer_name="EleutherAI/gpt-neo-125M" \
    --model_name_or_path="EleutherAI/gpt-neo-125M" \
    --dataset_name $HOME/gpt-code-clippy/data_processing/code_clippy.py \
    --data_dir /home/shared/code_clippy_data \
    --do_train --do_eval \
    --block_size="2048" \
    --per_device_train_batch_size="8" \
    --per_device_eval_batch_size="16" \
    --preprocessing_num_workers="8" \
    --learning_rate="3e-5" \
    --max_steps 100000 \
    --warmup_steps 2500 \
    --decay_steps 25000 \
    --adam_beta1="0.9" \
    --adam_beta2="0.95" \
    --weight_decay="0.1" \
    --overwrite_output_dir \
    --logging_steps="50" \
    --eval_steps="500" \
    --push_to_hub="False" \
    --report_to="all" \
    --dtype="bfloat16" \
    --skip_memory_metrics="True" \
    --save_steps="500" \
    --save_total_limit 10 \
    --report_to="wandb" \
    --run_name="gpt-neo-125M-code-clippy-dedup-scratch"
```

## Intended Use and Limitations

The model is pre-trained and not finetuned for any particular use or any particular programming language. Due to time constraints, this model was only pre-trained on 1% of the Code Clippy dataset.

The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.

1. **Over-reliance:** This model may generate plausible solutions that may appear correct, but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Security implications:** No filtering or checking of vulnerabilities or buggy code was performed on the dataset this model is trained on. This means that the dataset may contain code that is malicious or contains vulnerabilities. Therefore, this model may generate vulnerable, buggy, or malicious code. In safety-critical software, this could lead to software that works improperly and could result in serious consequences depending on the software. Additionally, this model may be able to be used to generate malicious code on purpose in order to perform ransomware or other such attacks.
4. **Legal implications:** No filtering was performed on licensed code. This means that the dataset may contain restrictively licensed code. As discussed in the paper, public GitHub repositories may fall under "fair use." However, there have been few to no previous cases of such usage of licensed publicly available code. Therefore, any code generated with this model may be required to obey license terms that align with the software it was trained on, such as GPL-3.0. The legal ramifications of using a language model trained on this dataset are unclear.
5. **Biases:** The programming languages most represented in the dataset this model was trained on are JavaScript and Python. Therefore, other still-popular languages such as C and C++ are less represented, and the model's performance for these languages will be comparatively worse. Additionally, this dataset only contains public repositories, so the model may not generate code that is representative of code written by private developers. No filtering was performed for potentially racist, offensive, or otherwise inappropriate content. Therefore, this model may reflect such biases in its generation.

GPT-Neo-125M-Code-Clippy is finetuned from GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run on GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-125M-code-clippy-dedup-scratch").to(device)
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-125M-code-clippy-dedup-scratch")

prompt = """def greet(name):
    '''A function to greet user.
    Given a user name it should say hello'''
"""

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device)
start = input_ids.size(1)

out = model.generate(input_ids,
                     do_sample=True,
                     max_length=50,
                     num_beams=2,
                     early_stopping=True,
                     eos_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][start:]))
```

### Limitations and Biases

The model is intended to be used for research purposes and comes with no guarantees of the quality of generated code.

GPT-CC is finetuned from GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Evaluation results

Below is a table containing the base model we started from and the model's performance on the [HumanEval Benchmark](https://github.com/openai/human-eval).

| Model | Dataset Used | pass@1 | pass@2 | pass@5 | pass@10 |
| --- | --- | :---------: | :---------: | :---------: | :---------: |
| [gpt-neo-125M (**trained from scratch**)](https://huggingface.co/flax-community/gpt-neo-125M-code-clippy-dedup-scratch) | [Code Clippy Data (Deduplicated)](https://the-eye.eu/public/AI/training_data/code_clippy_data/code_clippy_dedup_data/) (~1% of the data) | 0.00% | 0.00% | 0.00% | 0.00% |
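As referenced in the Model Description, the repository provides a script that adds the extra whitespace tokens and initializes their embeddings. The snippet below is only a rough sketch of the same idea with the `transformers` API; the token strings in `new_tokens` are illustrative assumptions, since the exact whitespace strings used for training are defined in the linked script.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# Illustrative indentation tokens only; the real strings live in add_new_tokens.py.
new_tokens = ["\t\t", "  ", "    ", "        "]

tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# Copy the embedding of the existing "\t" token into the new rows so the model
# starts out treating the added tokens as indentation.
with torch.no_grad():
    tab_id = tokenizer("\t", add_special_tokens=False).input_ids[0]
    embeddings = model.get_input_embeddings().weight
    for token in new_tokens:
        embeddings[tokenizer.convert_tokens_to_ids(token)] = embeddings[tab_id]
```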
{}
flax-community/gpt-neo-125M-code-clippy-dedup-scratch
null
[ "transformers", "jax", "tensorboard", "gpt_neo", "text-generation", "arxiv:2107.03374", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-Neo-125M-Code-Clippy-Dedup

> **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**

## Model Description

GPT-Neo-125M-Code-Clippy-Dedup is a [GPT-Neo-125M model](https://huggingface.co/EleutherAI/gpt-neo-125M) finetuned using causal language modeling on our deduplicated version of the Code Clippy Data dataset, which was scraped from public GitHub repositories (more information in the provided link). This model is specialized to autocomplete methods in multiple programming languages.

## Training data

[Code Clippy Data dataset](https://huggingface.co/datasets/code_search_net).

## Training procedure

In this model's training we tried to stabilize training by limiting the files used to those with file extensions of popular programming languages, since our dataset also contains other types of files, such as `.txt` or project configuration files. We used the following extensions to filter by (a purely illustrative sketch of this kind of filtering is included at the end of this card):

The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_streaming_filter_flax.py).

```bash
./run_clm_streaming_filter_flax.py \
    --output_dir $HOME/gpt-neo-125M-code-clippy-dedup \
    --model_name_or_path="EleutherAI/gpt-neo-125M" \
    --dataset_name $HOME/gpt-code-clippy/data_processing/code_clippy_filter.py \
    --data_dir $HOME/code_clippy_data/code_clippy_dedup_data \
    --text_column_name="text" \
    --do_train --do_eval \
    --block_size="2048" \
    --per_device_train_batch_size="8" \
    --per_device_eval_batch_size="16" \
    --preprocessing_num_workers="8" \
    --learning_rate="1e-4" \
    --max_steps 100000 \
    --warmup_steps 2000 \
    --decay_steps 30000 \
    --adam_beta1="0.9" \
    --adam_beta2="0.95" \
    --weight_decay="0.1" \
    --overwrite_output_dir \
    --logging_steps="25" \
    --eval_steps="500" \
    --push_to_hub="False" \
    --report_to="all" \
    --dtype="bfloat16" \
    --skip_memory_metrics="True" \
    --save_steps="500" \
    --save_total_limit 10 \
    --gradient_accumulation_steps 16 \
    --report_to="wandb" \
    --run_name="gpt-neo-125M-code-clippy-dedup-filtered-no-resize-2048bs" \
    --max_eval_samples 2000 \
    --save_optimizer true
```

## Intended Use and Limitations

The model is finetuned on text files from GitHub repositories (mostly programming-language source files, but also markdown and other project-related files).

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run on GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-125M-code-clippy-dedup").to(device)
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-125M-code-clippy-dedup")

prompt = """def greet(name):
    '''A function to greet user.
    Given a user name it should say hello'''
"""

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device)
start = input_ids.size(1)

out = model.generate(input_ids,
                     do_sample=True,
                     max_length=50,
                     num_beams=2,
                     early_stopping=True,
                     eos_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][start:]))
```

### Limitations and Biases

The model is intended to be used for research purposes and comes with no guarantees of quality of generated code.

The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.

1. **Over-reliance:** This model may generate plausible solutions that may appear correct, but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Security implications:** No filtering or checking of vulnerabilities or buggy code was performed on the dataset this model is trained on. This means that the dataset may contain code that is malicious or contains vulnerabilities. Therefore, this model may generate vulnerable, buggy, or malicious code. In safety-critical software, this could lead to software that works improperly and could result in serious consequences depending on the software. Additionally, this model may be able to be used to generate malicious code on purpose in order to perform ransomware or other such attacks.
4. **Legal implications:** No filtering was performed on licensed code. This means that the dataset may contain restrictively licensed code. As discussed in the paper, public GitHub repositories may fall under "fair use." However, there have been few to no previous cases of such usage of licensed publicly available code. Therefore, any code generated with this model may be required to obey license terms that align with the software it was trained on, such as GPL-3.0. The legal ramifications of using a language model trained on this dataset are unclear.
5. **Biases:** The programming languages most represented in the dataset this model was trained on are JavaScript and Python. Therefore, other still-popular languages such as C and C++ are less represented, and the model's performance for these languages will be comparatively worse. Additionally, this dataset only contains public repositories, so the model may not generate code that is representative of code written by private developers. No filtering was performed for potentially racist, offensive, or otherwise inappropriate content. Therefore, this model may reflect such biases in its generation.

GPT-Neo-125M-Code-Clippy-Dedup is finetuned from GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Eval results

Coming soon...
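As referenced in the Training procedure section, the snippet below only illustrates the kind of extension-based filtering described there. The extension set and record layout are hypothetical placeholders for illustration; the actual allow-list used for training is not reproduced in this card and is defined by the linked filtering script.

```py
# Hypothetical allow-list for illustration only; not the list used for training.
POPULAR_EXTENSIONS = {".py", ".js", ".java", ".c", ".cpp", ".go", ".rb", ".rs"}

def keep_example(file_name: str) -> bool:
    """Return True only for files whose extension is on the allow-list."""
    return any(file_name.lower().endswith(ext) for ext in POPULAR_EXTENSIONS)

# Example: filter a stream of (file_name, text) records before training.
records = [("train.py", "print('hi')"), ("README.md", "# docs"), ("app.js", "console.log(1)")]
kept = [text for name, text in records if keep_example(name)]
```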
{}
flax-community/gpt-neo-125M-code-clippy-dedup
null
[ "transformers", "pytorch", "jax", "tensorboard", "gpt_neo", "text-generation", "arxiv:2107.03374", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/gpt-neo-125M-code-clippy-test-1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
flax-community/gpt-neo-125M-code-clippy-test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-Neo-125M-Code-Clippy

> **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**

## Model Description

GPT-Neo-125M-Code-Clippy is a [GPT-Neo-125M model](https://huggingface.co/EleutherAI/gpt-neo-125M) finetuned using causal language modeling on our version of the Code Clippy Data dataset that still contains duplicates, which was scraped from public GitHub repositories (more information in the provided link). This model is specialized to autocomplete methods in multiple programming languages.

As discussed in OpenAI's [Codex paper](https://arxiv.org/abs/2107.03374), we modified the GPT-Neo model and tokenizer to accommodate additional whitespace characters. Specifically, we add the following tokens `["\t\t", " ", " ", " "]`, and since they are all related to indentation, we initialize the embedding layer of these tokens with the same weights as the `\t` token already present in the model, in hopes the model will learn to associate these whitespace characters with indentation faster. A script to automatically do this can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/utilities/add_new_tokens.py).

## Training data

[Code Clippy Data dataset](https://the-eye.eu/public/AI/training_data/code_clippy_data/code_clippy_dedup_data/).

## Training procedure

The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_streaming_flax.py). To reproduce the training one can use this command with the above script:

```bash
./run_clm_streaming_flax.py \
    --output_dir $HOME/gpt-neo-125M-code-clippy \
    --model_name_or_path="flax-community/gpt-neo-125M-code-clippy" \
    --dataset_name $HOME/gpt-code-clippy/data_processing/code_clippy.py \
    --data_dir /home/shared/code_clippy_data \
    --text_column_name="text" \
    --do_train --do_eval \
    --block_size="2048" \
    --per_device_train_batch_size="8" \
    --per_device_eval_batch_size="16" \
    --preprocessing_num_workers="8" \
    --learning_rate="1e-4" \
    --max_steps 100000 \
    --warmup_steps 2500 \
    --decay_steps 25000 \
    --adam_beta1="0.9" \
    --adam_beta2="0.95" \
    --weight_decay="0.1" \
    --overwrite_output_dir \
    --logging_steps="100" \
    --eval_steps="500" \
    --push_to_hub="False" \
    --report_to="all" \
    --dtype="bfloat16" \
    --skip_memory_metrics="True" \
    --save_steps="500" \
    --save_total_limit 10 \
    --gradient_accumulation_steps 16 \
    --report_to="wandb" \
    --run_name="125m_1e-4lr_1024bs" \
    --max_eval_samples 2000 \
    --save_optimizer true
```

## Intended Use and Limitations

The model is finetuned on text files from GitHub repositories (mostly programming-language source files, but also markdown and other project-related files).

### How to use

You can use this model directly with a pipeline for text generation (a short pipeline-based sketch is also included at the end of this card). This example generates a different sequence each time it's run:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run on GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-125M-code-clippy").to(device)
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-125M-code-clippy")

prompt = """def greet(name):
    '''A function to greet user.
    Given a user name it should say hello'''
"""

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device)
start = input_ids.size(1)

out = model.generate(input_ids,
                     do_sample=True,
                     max_length=50,
                     num_beams=2,
                     early_stopping=True,
                     eos_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][start:]))
```

### Limitations and Biases

The model is intended to be used for research purposes and comes with no guarantees of quality of generated code.

The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.

1. **Over-reliance:** This model may generate plausible solutions that may appear correct, but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Security implications:** No filtering or checking of vulnerabilities or buggy code was performed on the dataset this model is trained on. This means that the dataset may contain code that is malicious or contains vulnerabilities. Therefore, this model may generate vulnerable, buggy, or malicious code. In safety-critical software, this could lead to software that works improperly and could result in serious consequences depending on the software. Additionally, this model may be able to be used to generate malicious code on purpose in order to perform ransomware or other such attacks.
4. **Legal implications:** No filtering was performed on licensed code. This means that the dataset may contain restrictively licensed code. As discussed in the paper, public GitHub repositories may fall under "fair use." However, there have been few to no previous cases of such usage of licensed publicly available code. Therefore, any code generated with this model may be required to obey license terms that align with the software it was trained on, such as GPL-3.0. The legal ramifications of using a language model trained on this dataset are unclear.
5. **Biases:** The programming languages most represented in the dataset this model was trained on are JavaScript and Python. Therefore, other still-popular languages such as C and C++ are less represented, and the model's performance for these languages will be comparatively worse. Additionally, this dataset only contains public repositories, so the model may not generate code that is representative of code written by private developers. No filtering was performed for potentially racist, offensive, or otherwise inappropriate content. Therefore, this model may reflect such biases in its generation.

GPT-Neo-125M-Code-Clippy is finetuned from GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Eval results

Coming soon...
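As mentioned in the How to use section, the model can also be driven through the standard `transformers` text-generation pipeline. The snippet below is a short sketch of that; the generation settings are illustrative defaults rather than tuned values.

```py
from transformers import pipeline

# Load the model behind the text-generation pipeline.
generator = pipeline("text-generation", model="flax-community/gpt-neo-125M-code-clippy")

completions = generator(
    "def greet(name):\n    '''A function to greet user.'''\n",
    max_length=48,          # illustrative value
    do_sample=True,
    num_return_sequences=2,
)

for completion in completions:
    print(completion["generated_text"])
```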
{}
flax-community/gpt-neo-125M-code-clippy
null
[ "transformers", "pytorch", "jax", "tensorboard", "gpt_neo", "text-generation", "arxiv:2107.03374", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00