Dataset columns:
- `pipeline_tag`: string, 48 distinct values
- `library_name`: string, 205 distinct values
- `text`: string, length 0 to 18.3M
- `metadata`: string, length 2 to 1.07B
- `id`: string, length 5 to 122
- `last_modified`: null
- `tags`: list, 1 to 1.84k items
- `sha`: null
- `created_at`: string, length 25
text-classification
transformers
# Hungarian Sentence-level Sentiment Analysis Model with XLM-RoBERTa For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Pretrained model used: XLM-RoBERTa base - Finetuned on Hungarian Twitter Sentiment (HTS) Corpus - Labels: 0 (negative), 1 (positive) ## Limitations - max_seq_length = 128 ## Results | Model | HTS2 | HTS5 | | ------------- | ------------- | ------------- | | huBERT | 85.56 | 68.99 | | XLM-RoBERTa| **85.56** | 66.50 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {laki-yang-sentiment, title = {Improving Performance of Sentence-level Sentiment Analysis with Data Augmentation Methods}, booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)}, year = {2021}, publisher = {IEEE}, address = {Online}, author = {Laki, László and Yang, Zijian Győző}, pages = {417--422} } ```
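The card above gives no usage code; the following is a minimal sketch with the `transformers` pipeline API, assuming the checkpoint id `NYTK/sentiment-hts2-xlm-roberta-hungarian` listed below. The example sentence is the widget text from the card; the label names in the output depend on the checkpoint's `id2label` config (0 = negative, 1 = positive per the card).

```python
from transformers import pipeline

# Two-class Hungarian sentiment model (0: negative, 1: positive); the card lists max_seq_length = 128.
classifier = pipeline(
    "sentiment-analysis",
    model="NYTK/sentiment-hts2-xlm-roberta-hungarian",
)

# Widget example from the model card.
print(classifier("Jó reggelt! majd küldöm az élményhozókat :)."))
# -> something like [{'label': ..., 'score': ...}]
```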
{"language": ["hu"], "license": "mit", "tags": ["text-classification"], "metrics": ["accuracy"], "widget": [{"text": "J\u00f3 reggelt! majd k\u00fcld\u00f6m az \u00e9lm\u00e9nyhoz\u00f3kat :)."}]}
NYTK/sentiment-hts2-xlm-roberta-hungarian
null
[ "transformers", "pytorch", "roberta", "text-classification", "hu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# Hungarian Sentence-level Sentiment Analysis with Finetuned huBERT Model For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Pretrained model used: huBERT - Finetuned on Hungarian Twitter Sentiment (HTS) Corpus - Labels: 0 (very negative), 1 (negative), 2 (neutral), 3 (positive), 4 (very positive) ## Limitations - max_seq_length = 128 ## Results | Model | HTS2 | HTS5 | | ------------- | ------------- | ------------- | | huBERT | 85.56 | **68.99** | | XLM-RoBERTa| 85.56 | 66.50 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-sentiment, title = {Improving Performance of Sentence-level Sentiment Analysis with Data Augmentation Methods}, booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)}, year = {2021}, publisher = {IEEE}, address = {Online}, author = {Laki, László and Yang, Zijian Győző}, pages = {417--422} } ```
{"language": ["hu"], "license": "apache-2.0", "tags": ["text-classification"], "metrics": ["accuracy"], "widget": [{"text": "J\u00f3 reggelt! majd k\u00fcld\u00f6m az \u00e9lm\u00e9nyhoz\u00f3kat :)."}]}
NYTK/sentiment-hts5-hubert-hungarian
null
[ "transformers", "pytorch", "bert", "text-classification", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# Hungarian Sentence-level Sentiment Analysis Model with XLM-RoBERTa For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Pretrained model used: XLM-RoBERTa base - Finetuned on Hungarian Twitter Sentiment (HTS) Corpus - Labels: 0 (very negative), 1 (negative), 2 (neutral), 3 (positive), 4 (very positive) ## Limitations - max_seq_length = 128 ## Results | Model | HTS2 | HTS5 | | ------------- | ------------- | ------------- | | huBERT | 85.56 | **68.99** | | XLM-RoBERTa| 85.56 | 66.50 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {laki-yang-sentiment, title = {Improving Performance of Sentence-level Sentiment Analysis with Data Augmentation Methods}, booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)}, year = {2021}, publisher = {IEEE}, address = {Online}, author = {Laki, László and Yang, Zijian Győző}, pages = {417--422} } ```
{"language": ["hu"], "license": "mit", "tags": ["text-classification"], "metrics": ["accuracy"], "widget": [{"text": "J\u00f3 reggelt! majd k\u00fcld\u00f6m az \u00e9lm\u00e9nyhoz\u00f3kat :)."}]}
NYTK/sentiment-hts5-xlm-roberta-hungarian
null
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "hu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
summarization
transformers
# Hungarian Abstractive Summarization BART model For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp). - BART base model (see Results Table - bold): - Pretrained on Webcorpus 2.0 - Finetuned on HI corpus (hvg.hu + index.hu) - Segments: 559.162 ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) - **max_source_length = 1024** - max_target_length = 256 ## Results | Model | HI | NOL | | ------------- | ------------- | ------------- | | BART-base-512 | 30.18/13.86/22.92 | 46.48/32.40/39.45 | | BART-base-1024| **31.86/14.59/23.79** | 47.01/32.91/39.97 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-bart, title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {15--29} } ```
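As a usage illustration (not part of the original card): a minimal summarization sketch with `transformers`, assuming the checkpoint id `NYTK/summarization-hi-bart-base-1024-hungarian` listed below. The card expects input pre-tokenized with HuSpaCy, which is not shown here; the placeholder article text is hypothetical.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "NYTK/summarization-hi-bart-base-1024-hungarian"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt).to(device)

def summarize(article: str) -> str:
    # The card lists max_source_length = 1024 and max_target_length = 256.
    inputs = tokenizer([article], truncation=True, max_length=1024, return_tensors="pt").to(device)
    output = model.generate(**inputs, max_length=256, num_beams=4)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Hypothetical placeholder; real input should be a HuSpaCy-tokenized Hungarian article.
print(summarize("Ide kerül a HuSpaCy-vel tokenizált magyar újságcikk szövege ..."))
```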
{"language": ["hu"], "license": "apache-2.0", "tags": ["summarization"], "metrics": ["rouge"], "widget": [{"text": "A Tisza-parti v\u00e1ros \u00e1llatkertj\u00e9ben r\u00e9g\u00f3ta tartanak szurik\u00e1t\u00e1kat ( Suricata suricatta ) , de tavaly tavaszig nem siker\u00fclt szapor\u00edtani \u0151ket , annak ellen\u00e9re , hogy t\u00e1gas h\u00e1z \u00e9s kifut\u00f3 \u00e9p\u00fclt sz\u00e1mukra - k\u00f6z\u00f6lte Veprik R\u00f3bert igazgat\u00f3 . 2010-ben alakult ki az \u00faj - h\u00e1rom Amszterdamb\u00f3l sz\u00e1rmaz\u00f3 n\u0151st\u00e9nyb\u0151l \u00e9s egy budapesti fiatal h\u00edmb\u0151l \u00e1ll\u00f3 - csapat , amely szaporodni kezdett . 2011-ben h\u00e1rom , id\u00e9n pedig egy ut\u00f3ddal \u00f6rvendeztett\u00e9k meg a gondoz\u00f3kat \u00e9s az \u00e1llatbar\u00e1tokat . A szurik\u00e1t\u00e1k ut\u00f3dai - tizenegy hetes vemhess\u00e9g ut\u00e1n - okt\u00f3ber \u00e9s m\u00e1rcius k\u00f6z\u00f6tt vakon \u00e9s sz\u0151rtelen\u00fcl j\u00f6nnek a vil\u00e1gra . A kicsinyek h\u00e1romhetesen b\u00fajnak el\u0151 az \u00fcregb\u0151l , \u00e9s nevel\u00e9s\u00fckben mindk\u00e9t sz\u00fcl\u0151 r\u00e9szt vesz . A szurik\u00e1tacsapatokban a csal\u00e1d tagjai nagyon szoros kapcsolatban \u00e1llnak egym\u00e1ssal , viszont nagyon harciasan fell\u00e9pnek az idegenekkel szemben , ak\u00e1r meg is \u00f6lhetik azt az \u00e1llatot , amelyet betolakod\u00f3nak tekintenek . B\u00e1r a D\u00e9l-Afrik\u00e1ban , a Kalah\u00e1ri sivatagban \u0151shonos cibetmacskaf\u00e9le ragadoz\u00f3kat a szegedi \u00e1llatkertben term\u00e9szetes \u00e9l\u0151hely\u00fckh\u00f6z k\u00e9pest kevesebb vesz\u00e9ly fenyegeti , a vadasparki erd\u0151ben ragadoz\u00f3 madarak is \u00e9lnek , amelyek ak\u00e1r zs\u00e1km\u00e1nyk\u00e9nt is tekinthetn\u00e9nek a szurik\u00e1t\u00e1kra . A szegedi csapatn\u00e1l azonban szigor\u00fa \u0151rs\u00e9g van , mindig lesi valaki k\u00e9t l\u00e1bra \u00e1llva a vesz\u00e9lyforr\u00e1sokat . Az \u0151rszemek figyelm\u00e9t m\u00e9g a s\u00e1rk\u00e1nyrep\u00fcl\u0151k is felkeltik , \u00e9s felbukkan\u00e1sakor valamennyi egyed biztos helyre menek\u00fcl . A szurik\u00e1t\u00e1k a Kalah\u00e1ri sivatag boz\u00f3tos , szikl\u00e1s ter\u00fcletein csapatokban \u00e9lnek . A 700 gramm k\u00f6r\u00fcli testt\u00f6meg\u0171 ragadoz\u00f3k rovarokkal , l\u00e1rv\u00e1kkal , skorpi\u00f3kkal t\u00e1pl\u00e1lkoznak , de n\u00e9ha elfogyasztj\u00e1k a kisebb gerinceseket , toj\u00e1sokat \u00e9s n\u00f6v\u00e9nyi gum\u00f3kat is . A nappal akt\u00edv \u00e1llatok f\u00f6ldalatti \u00fcregrendszert \u00e1snak , amelynek t\u00f6bb bej\u00e1rata is van . Ha a szurik\u00e1t\u00e1k idegen csapattal vagy ragadoz\u00f3val ker\u00fclnek szembe , azonnal elkezdenek \u00e1sni , nagy porfelh\u0151t kavarva . Az is gyakorta el\u0151fordul , hogy szorosan egym\u00e1shoz b\u00fajnak , felborzolj\u00e1k sz\u0151r\u00fcket , megny\u00fajtj\u00e1k test\u00fcket , hogy min\u00e9l nagyobbnak l\u00e1tsz\u00f3djanak . Az el\u0151ad\u00e1suk cs\u00facspontj\u00e1n pedig az eg\u00e9sz csapat a leveg\u0151be ugrik , k\u00f6zben pedig morog . A hangad\u00e1s egy\u00e9bk\u00e9nt is fontos a szurik\u00e1t\u00e1k kapcsolat\u00e1ban , az egyedek legal\u00e1bb t\u00edzf\u00e9le jelz\u00e9st haszn\u00e1lnak a kol\u00f3ni\u00e1n bel\u00fcl ."}]}
NYTK/summarization-hi-bart-base-1024-hungarian
null
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
summarization
transformers
# Hungarian Abstractive Summarization BART model For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp). - BART base model (see Results Table - bold): - Pretrained on Webcorpus 2.0 - Finetuned on HI corpus (hvg.hu + index.hu) - Segments: 559.162 ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) - max_source_length = 512 - max_target_length = 256 ## Results | Model | HI | NOL | | ------------- | ------------- | ------------- | | BART-base-512 | **30.18/13.86/22.92** | 46.48/32.40/39.45 | | BART-base-1024| 31.86/14.59/23.79 | 47.01/32.91/39.97 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-bart, title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {15--29} } ```
{"language": ["hu"], "license": "apache-2.0", "tags": ["summarization"], "metrics": ["rouge"], "widget": [{"text": "A Tisza-parti v\u00e1ros \u00e1llatkertj\u00e9ben r\u00e9g\u00f3ta tartanak szurik\u00e1t\u00e1kat ( Suricata suricatta ) , de tavaly tavaszig nem siker\u00fclt szapor\u00edtani \u0151ket , annak ellen\u00e9re , hogy t\u00e1gas h\u00e1z \u00e9s kifut\u00f3 \u00e9p\u00fclt sz\u00e1mukra - k\u00f6z\u00f6lte Veprik R\u00f3bert igazgat\u00f3 . 2010-ben alakult ki az \u00faj - h\u00e1rom Amszterdamb\u00f3l sz\u00e1rmaz\u00f3 n\u0151st\u00e9nyb\u0151l \u00e9s egy budapesti fiatal h\u00edmb\u0151l \u00e1ll\u00f3 - csapat , amely szaporodni kezdett . 2011-ben h\u00e1rom , id\u00e9n pedig egy ut\u00f3ddal \u00f6rvendeztett\u00e9k meg a gondoz\u00f3kat \u00e9s az \u00e1llatbar\u00e1tokat . A szurik\u00e1t\u00e1k ut\u00f3dai - tizenegy hetes vemhess\u00e9g ut\u00e1n - okt\u00f3ber \u00e9s m\u00e1rcius k\u00f6z\u00f6tt vakon \u00e9s sz\u0151rtelen\u00fcl j\u00f6nnek a vil\u00e1gra . A kicsinyek h\u00e1romhetesen b\u00fajnak el\u0151 az \u00fcregb\u0151l , \u00e9s nevel\u00e9s\u00fckben mindk\u00e9t sz\u00fcl\u0151 r\u00e9szt vesz . A szurik\u00e1tacsapatokban a csal\u00e1d tagjai nagyon szoros kapcsolatban \u00e1llnak egym\u00e1ssal , viszont nagyon harciasan fell\u00e9pnek az idegenekkel szemben , ak\u00e1r meg is \u00f6lhetik azt az \u00e1llatot , amelyet betolakod\u00f3nak tekintenek . B\u00e1r a D\u00e9l-Afrik\u00e1ban , a Kalah\u00e1ri sivatagban \u0151shonos cibetmacskaf\u00e9le ragadoz\u00f3kat a szegedi \u00e1llatkertben term\u00e9szetes \u00e9l\u0151hely\u00fckh\u00f6z k\u00e9pest kevesebb vesz\u00e9ly fenyegeti , a vadasparki erd\u0151ben ragadoz\u00f3 madarak is \u00e9lnek , amelyek ak\u00e1r zs\u00e1km\u00e1nyk\u00e9nt is tekinthetn\u00e9nek a szurik\u00e1t\u00e1kra . A szegedi csapatn\u00e1l azonban szigor\u00fa \u0151rs\u00e9g van , mindig lesi valaki k\u00e9t l\u00e1bra \u00e1llva a vesz\u00e9lyforr\u00e1sokat . Az \u0151rszemek figyelm\u00e9t m\u00e9g a s\u00e1rk\u00e1nyrep\u00fcl\u0151k is felkeltik , \u00e9s felbukkan\u00e1sakor valamennyi egyed biztos helyre menek\u00fcl . A szurik\u00e1t\u00e1k a Kalah\u00e1ri sivatag boz\u00f3tos , szikl\u00e1s ter\u00fcletein csapatokban \u00e9lnek . A 700 gramm k\u00f6r\u00fcli testt\u00f6meg\u0171 ragadoz\u00f3k rovarokkal , l\u00e1rv\u00e1kkal , skorpi\u00f3kkal t\u00e1pl\u00e1lkoznak , de n\u00e9ha elfogyasztj\u00e1k a kisebb gerinceseket , toj\u00e1sokat \u00e9s n\u00f6v\u00e9nyi gum\u00f3kat is . A nappal akt\u00edv \u00e1llatok f\u00f6ldalatti \u00fcregrendszert \u00e1snak , amelynek t\u00f6bb bej\u00e1rata is van . Ha a szurik\u00e1t\u00e1k idegen csapattal vagy ragadoz\u00f3val ker\u00fclnek szembe , azonnal elkezdenek \u00e1sni , nagy porfelh\u0151t kavarva . Az is gyakorta el\u0151fordul , hogy szorosan egym\u00e1shoz b\u00fajnak , felborzolj\u00e1k sz\u0151r\u00fcket , megny\u00fajtj\u00e1k test\u00fcket , hogy min\u00e9l nagyobbnak l\u00e1tsz\u00f3djanak . Az el\u0151ad\u00e1suk cs\u00facspontj\u00e1n pedig az eg\u00e9sz csapat a leveg\u0151be ugrik , k\u00f6zben pedig morog . A hangad\u00e1s egy\u00e9bk\u00e9nt is fontos a szurik\u00e1t\u00e1k kapcsolat\u00e1ban , az egyedek legal\u00e1bb t\u00edzf\u00e9le jelz\u00e9st haszn\u00e1lnak a kol\u00f3ni\u00e1n bel\u00fcl ."}]}
NYTK/summarization-hi-bart-hungarian
null
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
summarization
transformers
# Hungarian Abstractive Summarization BART model For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp). - BART base model (see Results Table - bold): - Pretrained on Webcorpus 2.0 - Finetuned on NOL corpus (nol.hu) - Segments: 397,343 ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) - max_source_length = 512 - max_target_length = 256 ## Results | Model | HI | NOL | | ------------- | ------------- | ------------- | | BART-base-512 | 30.18/13.86/22.92 | **46.48/32.40/39.45** | | BART-base-1024| 31.86/14.59/23.79 | 47.01/32.91/39.97 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-bart, title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {15--29} } ```
{"language": ["hu"], "license": "apache-2.0", "tags": ["summarization"], "metrics": ["rouge"], "widget": [{"text": "A Tisza-parti v\u00e1ros \u00e1llatkertj\u00e9ben r\u00e9g\u00f3ta tartanak szurik\u00e1t\u00e1kat ( Suricata suricatta ) , de tavaly tavaszig nem siker\u00fclt szapor\u00edtani \u0151ket , annak ellen\u00e9re , hogy t\u00e1gas h\u00e1z \u00e9s kifut\u00f3 \u00e9p\u00fclt sz\u00e1mukra - k\u00f6z\u00f6lte Veprik R\u00f3bert igazgat\u00f3 . 2010-ben alakult ki az \u00faj - h\u00e1rom Amszterdamb\u00f3l sz\u00e1rmaz\u00f3 n\u0151st\u00e9nyb\u0151l \u00e9s egy budapesti fiatal h\u00edmb\u0151l \u00e1ll\u00f3 - csapat , amely szaporodni kezdett . 2011-ben h\u00e1rom , id\u00e9n pedig egy ut\u00f3ddal \u00f6rvendeztett\u00e9k meg a gondoz\u00f3kat \u00e9s az \u00e1llatbar\u00e1tokat . A szurik\u00e1t\u00e1k ut\u00f3dai - tizenegy hetes vemhess\u00e9g ut\u00e1n - okt\u00f3ber \u00e9s m\u00e1rcius k\u00f6z\u00f6tt vakon \u00e9s sz\u0151rtelen\u00fcl j\u00f6nnek a vil\u00e1gra . A kicsinyek h\u00e1romhetesen b\u00fajnak el\u0151 az \u00fcregb\u0151l , \u00e9s nevel\u00e9s\u00fckben mindk\u00e9t sz\u00fcl\u0151 r\u00e9szt vesz . A szurik\u00e1tacsapatokban a csal\u00e1d tagjai nagyon szoros kapcsolatban \u00e1llnak egym\u00e1ssal , viszont nagyon harciasan fell\u00e9pnek az idegenekkel szemben , ak\u00e1r meg is \u00f6lhetik azt az \u00e1llatot , amelyet betolakod\u00f3nak tekintenek . B\u00e1r a D\u00e9l-Afrik\u00e1ban , a Kalah\u00e1ri sivatagban \u0151shonos cibetmacskaf\u00e9le ragadoz\u00f3kat a szegedi \u00e1llatkertben term\u00e9szetes \u00e9l\u0151hely\u00fckh\u00f6z k\u00e9pest kevesebb vesz\u00e9ly fenyegeti , a vadasparki erd\u0151ben ragadoz\u00f3 madarak is \u00e9lnek , amelyek ak\u00e1r zs\u00e1km\u00e1nyk\u00e9nt is tekinthetn\u00e9nek a szurik\u00e1t\u00e1kra . A szegedi csapatn\u00e1l azonban szigor\u00fa \u0151rs\u00e9g van , mindig lesi valaki k\u00e9t l\u00e1bra \u00e1llva a vesz\u00e9lyforr\u00e1sokat . Az \u0151rszemek figyelm\u00e9t m\u00e9g a s\u00e1rk\u00e1nyrep\u00fcl\u0151k is felkeltik , \u00e9s felbukkan\u00e1sakor valamennyi egyed biztos helyre menek\u00fcl . A szurik\u00e1t\u00e1k a Kalah\u00e1ri sivatag boz\u00f3tos , szikl\u00e1s ter\u00fcletein csapatokban \u00e9lnek . A 700 gramm k\u00f6r\u00fcli testt\u00f6meg\u0171 ragadoz\u00f3k rovarokkal , l\u00e1rv\u00e1kkal , skorpi\u00f3kkal t\u00e1pl\u00e1lkoznak , de n\u00e9ha elfogyasztj\u00e1k a kisebb gerinceseket , toj\u00e1sokat \u00e9s n\u00f6v\u00e9nyi gum\u00f3kat is . A nappal akt\u00edv \u00e1llatok f\u00f6ldalatti \u00fcregrendszert \u00e1snak , amelynek t\u00f6bb bej\u00e1rata is van . Ha a szurik\u00e1t\u00e1k idegen csapattal vagy ragadoz\u00f3val ker\u00fclnek szembe , azonnal elkezdenek \u00e1sni , nagy porfelh\u0151t kavarva . Az is gyakorta el\u0151fordul , hogy szorosan egym\u00e1shoz b\u00fajnak , felborzolj\u00e1k sz\u0151r\u00fcket , megny\u00fajtj\u00e1k test\u00fcket , hogy min\u00e9l nagyobbnak l\u00e1tsz\u00f3djanak . Az el\u0151ad\u00e1suk cs\u00facspontj\u00e1n pedig az eg\u00e9sz csapat a leveg\u0151be ugrik , k\u00f6zben pedig morog . A hangad\u00e1s egy\u00e9bk\u00e9nt is fontos a szurik\u00e1t\u00e1k kapcsolat\u00e1ban , az egyedek legal\u00e1bb t\u00edzf\u00e9le jelz\u00e9st haszn\u00e1lnak a kol\u00f3ni\u00e1n bel\u00fcl ."}]}
NYTK/summarization-nol-bart-hungarian
null
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Hungarian GPT-2 news generator For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Pretrained on Hungarian Wikipedia - Finetuned on hin corpus (hvg.hu, index.hu, nol.hu) ## Results | Model | Perplexity | | ------------- | ------------- | | GPT-2 poem | 47.46 | | **GPT-2 news** | **22.06** | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-gpt2, title = {{"Az invazív medvék nem tolerálják a suzukis agressziót" - Magyar GPT-2 kísérleti modell}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {463--476} } ```
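A minimal generation sketch (not from the original card), assuming the checkpoint id `NYTK/text-generation-news-gpt2-small-hungarian` listed below; the prompt is the widget text from the card and the sampling settings are arbitrary.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NYTK/text-generation-news-gpt2-small-hungarian",
)

# Widget prompt from the model card; sampling parameters are illustrative.
out = generator(
    "Szeptember végén zárul a balatoni szezon",
    max_length=100,
    do_sample=True,
    top_k=50,
)
print(out[0]["generated_text"])
```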
{"language": ["hu"], "license": "mit", "tags": ["text-generation"], "widget": [{"text": "Szeptember v\u00e9g\u00e9n z\u00e1rul a balatoni szezon"}]}
NYTK/text-generation-news-gpt2-small-hungarian
null
[ "transformers", "pytorch", "gpt2", "text-generation", "hu", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Hungarian GPT-2 poem generator in Petőfi style For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Pretrained on Hungarian Wikipedia - Finetuned on Petőfi Sándor összes költeményei ## Results | Model | Perplexity | | ------------- | ------------- | | **GPT-2 poem** | **47.46** | | GPT-2 news | 22.06 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-gpt2, title = {{"Az invazív medvék nem tolerálják a suzukis agressziót" - Magyar GPT-2 kísérleti modell}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {463--476} } ```
{"language": ["hu"], "license": "mit", "tags": ["text-generation"], "widget": [{"text": "Szegeden, janu\u00e1r v\u00e9g\u00e9n,"}]}
NYTK/text-generation-poem-petofi-gpt2-small-hungarian
null
[ "transformers", "pytorch", "gpt2", "text-generation", "hu", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
# BART Translation model For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Source language: English - Target language: Hungarian - BART base model: - Pretrained on English WikiText-103 and Hungarian Wikipedia - Finetuned on subcorpora from OPUS - Segments: 56.837.602 ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) - max_source_length = 128 - max_target_length = 128 ## Results | Model | BLEU | chrF-3 | chrF-6 | | ------------- | ------------- | ------------- | ------------- | | Google | 25.30 | 54.09 | 49.0 | | **BART** | **36.89** | **60.77** | **56.4** | | mT5 | 27.69 | 53.73 | 48.57 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {laki-yang-mt, title = {{Jobban fordítunk magyarra, mint a Google!}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Laki, László and Yang, Zijian Győző}, pages = {357--372} } ```
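A minimal translation sketch (not from the original card), assuming the checkpoint id `NYTK/translation-bart-128-en-hu` listed below; the card caps source and target at 128 tokens and expects HuSpaCy-tokenized input, which is not reproduced here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "NYTK/translation-bart-128-en-hu"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

def translate(sentence: str) -> str:
    # max_source_length = max_target_length = 128, per the card.
    inputs = tokenizer([sentence], truncation=True, max_length=128, return_tensors="pt")
    output = model.generate(**inputs, max_length=128, num_beams=4)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Widget sentence from the model card.
print(translate("This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter."))
```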
{"language": ["en", "hu"], "license": "apache-2.0", "tags": ["translation"], "metrics": ["sacrebleu", "chrf"], "widget": [{"text": "This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter.", "example_title": "Translation: English-Hungarian"}]}
NYTK/translation-bart-128-en-hu
null
[ "transformers", "pytorch", "bart", "text2text-generation", "translation", "en", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
# BART Translation model For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Source language: English - Target language: Hungarian - Pretrained on English WikiText-103 and Hungarian Wikipedia - Finetuned on subcorpora from OPUS - Segments: 56.837.602 ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) ## Results | Model | BLEU | chrF-3 | | ------------- | ------------- | ------------- | | Google en-hu | 25.30 | 54.08 | | **BART-base-enhu** | **34.38** | **58.88** | | Google hu-en| 34.48 | 59.59 | | **BART-base-huen** | **38.03** | **61.37** | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-bart, title = {{BARTerezzünk! Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {15--29} } ```
{"language": ["en", "hu"], "license": "apache-2.0", "tags": ["translation"], "metrics": ["sacrebleu", "chrf"], "widget": [{"text": "This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter.", "example_title": "Translation: English-Hungarian"}]}
NYTK/translation-bart-en-hu
null
[ "transformers", "pytorch", "bart", "text2text-generation", "translation", "en", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
# BART Translation model For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Source language: Hungarian - Target language: English - Pretrained on English WikiText-103 and Hungarian Wikipedia - Finetuned on subcorpora from OPUS - Segments: 56.837.602 ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) ## Results | Model | BLEU | chrF-3 | | ------------- | ------------- | ------------- | | Google en-hu | 25.30 | 54.08 | | **BART-base-enhu** | **34.38** | **58.88** | | Google hu-en| 34.48 | 59.59 | | **BART-base-huen** | **38.03** | **61.37** | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-bart, title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {15--29} } ```
{"language": ["hu", "en"], "license": "apache-2.0", "tags": ["translation"], "metrics": ["sacrebleu", "chrf"], "widget": [{"text": "Szeretn\u00e9m megragadni az alkalmat uram, hogy az enged\u00e9ly\u00e9t k\u00e9rjem, hogy tal\u00e1lkozhassak a l\u00e1ny\u00e1val."}]}
NYTK/translation-bart-hu-en
null
[ "transformers", "pytorch", "bart", "text2text-generation", "translation", "hu", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
# Marian Translation model For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp). A description of the REST API of our service is also available there. This model was trained with [MarianNMT](https://github.com/marian-nmt/marian-dev) v1.10.23 (commit 42f0b8b7) using the transformer-big model type. This repository contains our English-Hungarian (en-hu) translation model, which was published at the MSZNY 2022 conference. - Source language: English - Target language: Hungarian - Pretrained on subcorpora from OPUS - Segments: 56.837.602 ## Limitations ## Results | Model | BLEU | chrF-3 | | ------------- | ------------- | ------------- | | Google en-hu | 25.30 | 54.08 | | **Marian-big-enhu** | **37.30** | **61.61** | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {laki-yang-mt, title = {{Jobban fordítunk magyarra, mint a Google!}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Laki, László and Yang, Zijian Győző}, pages = {357--372} } ```
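A minimal loading sketch (not from the original card) for the checkpoint `NYTK/translation-marianmt-en-hu` listed below, using the generic `transformers` auto classes; the repo is tagged as a `marian` model, so these should resolve to the MarianMT implementation. The beam size is arbitrary.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "NYTK/translation-marianmt-en-hu"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# Widget sentence from the model card.
src = "This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter."
inputs = tokenizer([src], return_tensors="pt")
output = model.generate(**inputs, num_beams=4)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```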
{"language": ["en", "hu"], "license": "gpl-3.0", "tags": ["translation"], "metrics": ["sacrebleu", "chrf"], "widget": [{"text": "This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter.", "example_title": "Translation: English-Hungarian"}]}
NYTK/translation-marianmt-en-hu
null
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "en", "hu", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
# mT5 Translation model For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Source language: English - Target language: Hungarian - Pretrained model used: mT5-small - Finetuned on subcorpora from OPUS - Segments: 56.837.602 - prefix: "translate English to Hungarian: " ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) - max_source_length = 128 - max_target_length = 128 ## Results | Model | BLEU | chrF-3 | chrF-6 | | ------------- | ------------- | ------------- | ------------- | | Google en-hu | 25.30 | 54.08 | 49.06 | | BART | 36.89 | 60.77 | 56.4 | | **mT5** | **27.69** | **53.73** | **48.57** | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {laki-yang-mt, title = {{Jobban fordítunk magyarra, mint a Google!}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Laki, László and Yang, Zijian Győző}, pages = {357--372} } ```
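Because the card requires the "translate English to Hungarian: " prefix, a minimal sketch (not from the original card) spells that out; it assumes the checkpoint id `NYTK/translation-mt5-small-128-en-hu` listed below.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "NYTK/translation-mt5-small-128-en-hu"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# The task prefix from the card is mandatory; source/target are capped at 128 tokens.
text = ("translate English to Hungarian: This may not make much sense to you, sir, "
        "but I'd like to ask your permission to date your daughter.")
inputs = tokenizer([text], truncation=True, max_length=128, return_tensors="pt")
output = model.generate(**inputs, max_length=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```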
{"language": ["en", "hu"], "license": "apache-2.0", "tags": ["translation"], "metrics": ["sacrebleu", "chrf"], "widget": [{"text": "translate English to Hungarian: This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter."}]}
NYTK/translation-mt5-small-128-en-hu
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "translation", "en", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
NaDy/ko-mrc-model
null
[ "transformers", "pytorch", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
nabarun/DialoGPT-small-joshua
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nadiazabeti1/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nagnew/clean_data
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NahedAbdelgaber/distilbert-base-uncased-1024-finetuned-evaluating-student-writing
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3408 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5869 | 1.0 | 157 | 2.3949 | | 2.4142 | 2.0 | 314 | 2.3551 | | 2.3792 | 3.0 | 471 | 2.2840 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing", "results": []}]}
NahedAbdelgaber/distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-evaluating-student-writing This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3485 | 1.0 | 878 | 2.0959 | | 2.1407 | 2.0 | 1756 | 2.0162 | | 2.0843 | 3.0 | 2634 | 1.9846 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
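The auto-generated card above omits usage; here is a minimal fill-mask sketch, assuming the checkpoint id `NahedAbdelgaber/distilbert-base-uncased-finetuned-evaluating-student-writing` listed below. The masked sentence is a hypothetical example.

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="NahedAbdelgaber/distilbert-base-uncased-finetuned-evaluating-student-writing",
)

# Hypothetical example; the mask token is taken from the tokenizer to stay model-agnostic.
masked = f"Students should learn to {fill.tokenizer.mask_token} their essays before submitting them."
for pred in fill(masked):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```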
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-evaluating-student-writing", "results": []}]}
NahedAbdelgaber/distilbert-base-uncased-finetuned-evaluating-student-writing
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NahedAbdelgaber/distilbert-base-uncased-finetuned-imdb
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # evaluating-student-writing-distibert-ner-with-metric This model is a fine-tuned version of [NahedAbdelgaber/evaluating-student-writing-distibert-ner](https://huggingface.co/NahedAbdelgaber/evaluating-student-writing-distibert-ner) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7535 - Precision: 0.0614 - Recall: 0.2590 - F1: 0.0993 - Accuracy: 0.6188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.7145 | 1.0 | 1755 | 0.7683 | 0.0546 | 0.2194 | 0.0875 | 0.6191 | | 0.6608 | 2.0 | 3510 | 0.7504 | 0.0570 | 0.2583 | 0.0934 | 0.6136 | | 0.5912 | 3.0 | 5265 | 0.7535 | 0.0614 | 0.2590 | 0.0993 | 0.6188 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "evaluating-student-writing-distibert-ner-with-metric", "results": []}]}
NahedAbdelgaber/evaluating-student-writing-distibert-ner-with-metric
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # evaluating-student-writing-distibert-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7688 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.871 | 1.0 | 1755 | 0.8158 | | 0.7476 | 2.0 | 3510 | 0.7688 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
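The auto-generated card above does not show inference; a minimal token-classification sketch follows, assuming the checkpoint id `NahedAbdelgaber/evaluating-student-writing-distibert-ner` listed below and a reasonably recent `transformers` release (for `aggregation_strategy`). The input sentence is a hypothetical example; the label set depends on the checkpoint's config.

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="NahedAbdelgaber/evaluating-student-writing-distibert-ner",
    aggregation_strategy="simple",  # merge word pieces into labeled spans
)

# Hypothetical example sentence in the student-essay domain.
for span in tagger("In my opinion, schools should start later because students need more sleep."):
    print(span["entity_group"], repr(span["word"]), round(float(span["score"]), 3))
```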
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "evaluating-student-writing-distibert-ner", "results": []}]}
NahedAbdelgaber/evaluating-student-writing-distibert-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nailcan/nailcankara-wav2vec2-large-xls-r-300m-turkish-colab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
NakHyun/electra_kr_v1-discriminator
null
[ "transformers", "pytorch", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NakHyun/electra_kr_v1-generator
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
Test

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("NakHyun/electra_kr_v1")
model = AutoModel.from_pretrained("NakHyun/electra_kr_v1")
```
{}
NakHyun/electra_kr_v1
null
[ "transformers", "pytorch", "electra", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8239 - Matthews Correlation: 0.5495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5235 | 1.0 | 535 | 0.5402 | 0.4156 | | 0.3484 | 2.0 | 1070 | 0.5272 | 0.5233 | | 0.2381 | 3.0 | 1605 | 0.6665 | 0.5050 | | 0.1746 | 4.0 | 2140 | 0.7512 | 0.5429 | | 0.1308 | 5.0 | 2675 | 0.8239 | 0.5495 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
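For readers who want to reproduce the run, the hyperparameters listed above map directly onto `transformers.TrainingArguments`; a rough sketch follows (the `output_dir` name is illustrative, and the dataset, model, and `Trainer` wiring are omitted).

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in the card; the Adam betas/epsilon shown there are the defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # illustrative name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
)
```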
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5494735380761103, "name": "Matthews Correlation"}]}]}]}
NaliniK/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
# Configuration `title`: _string_ Display title for the Space `emoji`: _string_ Space emoji (emoji-only character allowed) `colorFrom`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `colorTo`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `sdk`: _string_ Can be either `gradio`, `streamlit`, or `static` `sdk_version` : _string_ Only applicable for `streamlit` SDK. See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. `app_file`: _string_ Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). Path is relative to the root of the repository. `pinned`: _boolean_ Whether the Space stays on top of your list.
{"title": "Pdf Table Extractor To CSV", "emoji": ";)", "colorFrom": "yellow", "colorTo": "green", "sdk": "streamlit", "app_file": "App_For_PDF_To_Dataframe.py", "pinned": false}
Nalla/PDF_To_CSV
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Aqua from Konosuba DialoGPT Model
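The card gives no usage snippet; a minimal chat-loop sketch following the standard DialoGPT pattern, assuming the checkpoint id `NamPE/DialoGPT-medium-Aqua-konosuba` listed below. The user turns are invented for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

ckpt = "NamPE/DialoGPT-medium-Aqua-konosuba"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt)

chat_history_ids = None
for user_input in ["Hi Aqua, how are you today?", "Want to go on an adventure?"]:
    # Append the EOS token so the model knows the user turn has ended.
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```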
{"tags": ["conversational"]}
NamPE/DialoGPT-medium-Aqua-konosuba
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Takanashi Rikka DialoGPT Model
{"tags": ["conversational"]}
NamPE/DialoGPT-medium-Takanashi-Rikka
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Satou Hina DialoGPT Model
{"tags": ["conversational"]}
NamPE/DialoGPT-small-satouhina
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Namk/bert-base-uncased-finetuned-swag
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nanad/Nanad
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nanci/relation_extraction
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Bapibot
{"tags": ["conversational"]}
NanniKirby/DialoGPT-medium-bapi
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Bapibot
{"tags": ["conversational"]}
NanniKirby/bapismall
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Narender/en-hi-retrained
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NareshPS/test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Narrativa/bert-tiny-finetuned-fake-news-detection
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
summarization
transformers
# Spanish RoBERTa2RoBERTa (roberta-base-bne) fine-tuned on MLSUM ES for summarization ## Model [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) (RoBERTa Checkpoint) ## Dataset **MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, **Spanish**, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset. [MLSUM es](https://huggingface.co/datasets/viewer/?dataset=mlsum) ## Results |Set|Metric| Value| |----|------|------| | Test |Rouge2 - mid -precision | 11.42| | Test | Rouge2 - mid - recall | 10.58 | | Test | Rouge2 - mid - fmeasure | 10.69| | Test | Rouge1 - fmeasure | 28.83 | | Test | RougeL - fmeasure | 23.15 | Raw metrics using HF/metrics `rouge`: ```python rouge = datasets.load_metric("rouge") rouge.compute(predictions=results["pred_summary"], references=results["summary"]) {'rouge1': AggregateScore(low=Score(precision=0.30393366820245, recall=0.27905239591639935, fmeasure=0.283148902808752), mid=Score(precision=0.3068521142101569, recall=0.2817252494122592, fmeasure=0.28560373425206464), high=Score(precision=0.30972608774202665, recall=0.28458152325781716, fmeasure=0.2883786700591887)), 'rougeL': AggregateScore(low=Score(precision=0.24184668819794716, recall=0.22401171380621518, fmeasure=0.22624104698839514), mid=Score(precision=0.24470388406868163, recall=0.22665793214539162, fmeasure=0.2289118878817394), high=Score(precision=0.2476594458951327, recall=0.22932683203591905, fmeasure=0.23153001570662513))} rouge.compute(predictions=results["pred_summary"], references=results["summary"], rouge_types=["rouge2"])["rouge2"].mid Score(precision=0.11423200347113865, recall=0.10588038944902506, fmeasure=0.1069921217219595) ``` ## Usage ```python import torch from transformers import RobertaTokenizerFast, EncoderDecoderModel device = 'cuda' if torch.cuda.is_available() else 'cpu' ckpt = 'Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization' tokenizer = RobertaTokenizerFast.from_pretrained(ckpt) model = EncoderDecoderModel.from_pretrained(ckpt).to(device) def generate_summary(text): inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_ids = inputs.input_ids.to(device) attention_mask = inputs.attention_mask.to(device) output = model.generate(input_ids, attention_mask=attention_mask) return tokenizer.decode(output[0], skip_special_tokens=True) text = "Your text here..." generate_summary(text) ``` Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
{"language": "es", "tags": ["summarization", "news"], "datasets": ["mlsum"], "widget": [{"text": "Al filo de las 22.00 horas del jueves, la Asamblea de Madrid vive un momento sorprendente: Vox decide no apoyar una propuesta del PP en favor del blindaje fiscal de la Comunidad. Se ha roto la unidad de los tres partidos de derechas. Es un hecho excepcional. Desde que arranc\u00f3 la legislatura, PP, Cs y Vox han votado en bloque casi el 75% de las veces en el pleno de la C\u00e1mara. Juntos decidieron la composici\u00f3n de la Mesa de la Asamblea. Juntos invistieron presidenta a Isabel D\u00edaz Ayuso. Y juntos han votado la mayor\u00eda de proposiciones no de ley, incluida la que ha marcado el esprint final de la campa\u00f1a para las elecciones generales: acaban de instar al Gobierno de Espa\u00f1a a \"la ilegalizaci\u00f3n inmediata\" de los partidos separatistas \"que atenten contra la unidad de la Naci\u00f3n\". Los cr\u00edticos de Cs no comparten el apoyo al texto de Vox contra el secesionisimo Ese balance retrata una necesidad antes que una complicidad, seg\u00fan fuentes del PP con predicamento en la direcci\u00f3n regional y nacional. Tras casi 15 a\u00f1os gobernando con mayor\u00eda absoluta, la formaci\u00f3n conservadora vivi\u00f3 como una tortura la pasada legislatura, en la que dependi\u00f3 de Cs para sacar adelante sus iniciativas. El problema se agudiz\u00f3 tras las elecciones auton\u00f3micas de mayo. El PP ha tenido que formar con Cs el primer gobierno de coalici\u00f3n de la historia de la regi\u00f3n, y ni siquiera con eso le basta para ganar las votaciones de la C\u00e1mara. Los dos socios gubernamentales necesitan a Vox, la menos predecible de las tres formaciones. \"Tenemos que trabajar juntos defendiendo la unidad del pa\u00eds, por eso no quisimos dejar a Vox solo\", dijo ayer D\u00edaz Ayuso para justificar el apoyo de PP y Cs a la proposici\u00f3n de la extrema derecha sobre Catalu\u00f1a. \"Despu\u00e9s nosotros llev\u00e1bamos otra proposici\u00f3n para defender el blindaje fiscal de Madrid, y ah\u00ed Vox nos dej\u00f3 atr\u00e1s. No permiti\u00f3 que esto saliera. Es un grave error por su parte\", prosigui\u00f3, recalcando el enfado del PP. \"Demuestra que est\u00e1 m\u00e1s en cuestiones electoralistas\", subray\u00f3. \"Los que pensamos, con nuestras inmensas diferencias, que tenemos cosas en com\u00fan que nos unen como partidos que queremos Comunidades libres, con bajos impuestos, en las que se viva con seguridad y en paz, tenemos que estar unidos\", argument\u00f3. \"Y por lo menos nosotros de nuestra l\u00ednea no nos separamos\". Al contrario de lo que est\u00e1 ocurriendo el Ayuntamiento de Madrid, donde el PP y Cs ya han defendido posiciones de voto distintas, pese a compartir el Gobierno, en la Asamblea los partidos de D\u00edaz Ayuso e Ignacio Aguado est\u00e1n actuando con la m\u00e1xima lealtad en las votaciones del pleno. Otra cosa son las comisiones. Y el caso Avalmadrid. Es en ese terreno donde Cs y Vox est\u00e1n buscando el margen de maniobra necesario para separarse del PP en plena campa\u00f1a electoral, abandonando a su suerte a su socio para distinguirse ante los electores. 
\u2014\"Usted me ha dejado tirada\", le espet\u00f3 la presidenta de la Comunidad de Madrid a Roc\u00edo Monasterio tras saber que Vox permitir\u00eda que la izquierda tuviera mayor\u00eda en la comisi\u00f3n parlamentaria que investigar\u00e1 los avales concedidos por la empresa semip\u00fablica entre 2007 y 2018, lo que podr\u00eda incluir el de 400.000 euros aprobado en 2011, y nunca devuelto al completo, para una empresa participada por el padre de Isabel D\u00edaz Ayuso. \"Monasterio no es de fiar. Dice una cosa y hace la contraria\", dice una fuente popular sobre las negociaciones mantenidas para repartirse los puestos de las diferentes comisiones, que Vox no cumpli\u00f3 tras buscar un segundo pacto con otras formaciones (que no lleg\u00f3 a buen puerto). Ilegalizaci\u00f3n de Vox Los tres partidos de derechas tambi\u00e9n se han enfrentado por la ubicaci\u00f3n de Vox en el pleno. Las largas negociaciones para la investidura de D\u00edaz Ayuso dejaron heridas abiertas. Y los diputados de Cs no desaprovechan la oportunidad de lanzar dardos contra los de Vox, pero luego coinciden con ellos en la mayor\u00eda de votaciones. Ocurri\u00f3, por ejemplo, el jueves, cuando se debat\u00eda la pol\u00e9mica proposici\u00f3n para instar al Gobierno nacional a ilegalizar a los partidos separatistas que atenten contra la unidad de Espa\u00f1a. \u2014\"Mostrar nuestra sorpresa ante la presentaci\u00f3n por parte de Vox de esta propuesta\", lanz\u00f3 Araceli G\u00f3mez, diputada de la formaci\u00f3n de Aguado. \"Sorprende que planteen ustedes este asunto cuando est\u00e1 tambi\u00e9n sobre la mesa el debate de su propia ilegalizaci\u00f3n por atentar contra el ordenamiento jur\u00eddico o contra valores constitucionales como la igualdad o la no discriminaci\u00f3n\". Luego de esa descalificaci\u00f3n, y ante la incredulidad de los diputados de los partidos de izquierdas, Cs uni\u00f3 sus votos a los de Vox y a los del PP. La decisi\u00f3n ha provocado pol\u00e9mica interna, como demuestra que Albert Rivera no la apoyara ayer expl\u00edcitamente. Tampoco ha sido bien acogida por el sector cr\u00edtico de la formaci\u00f3n. Pero ha demostrado una cosa: en Madrid hay tres partidos que casi siempre votan como uno."}]}
Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization
null
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "summarization", "news", "es", "dataset:mlsum", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# ByT5-base fine-tuned for Question Answering (on Tweets) [ByT5](https://huggingface.co/google/byt5-base) base fine-tuned on [TweetQA](https://huggingface.co/datasets/tweet_qa) dataset for **Question Answering** downstream task. # Details of ByT5 - Base 🧠 ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base). ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292). Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Details of the downstream task (Question Answering) - Dataset 📚 [TweetQA](https://huggingface.co/datasets/tweet_qa) With social media becoming increasingly more popular, lots of news and real-time events are being covered. Developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous question answering (QA) datasets have focused on formal text such as news and Wikipedia, we present the first large-scale dataset for QA over social media data. To make sure that the tweets are meaningful and contain interesting information, we gather tweets used by journalists to write news articles. We then ask human annotators to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD (in which the answers are extractive), we allow the answers to be abstractive. The task requires the model to read a short tweet and a question and outputs a text phrase (does not need to be in the tweet) as the answer. - Data Instances: Sample ```json { "Question": "who is the tallest host?", "Answer": ["sam bee","sam bee"], "Tweet": "Don't believe @ConanOBrien's height lies. Sam Bee is the tallest host in late night. 
#alternativefacts \u2014 Full Frontal (@FullFrontalSamB) January 22, 2017", "qid": "3554ee17d86b678be34c4dc2c04e334f" } ``` - Data Fields: *Question*: a question based on information from a tweet *Answer*: list of possible answers from the tweet *Tweet*: source tweet *qid*: question id ## Model in Action 🚀 ```sh git clone https://github.com/huggingface/transformers.git pip install -q ./transformers ``` ```python from transformers import AutoTokenizer, T5ForConditionalGeneration ckpt = 'Narrativa/byt5-base-finetuned-tweet-qa' tokenizer = AutoTokenizer.from_pretrained(ckpt) model = T5ForConditionalGeneration.from_pretrained(ckpt).to('cuda') def get_answer(question, context): input_text = 'question: %s context: %s' % (question, context) inputs = tokenizer([input_text], return_tensors='pt') input_ids = inputs.input_ids.to('cuda') attention_mask = inputs.attention_mask.to('cuda') output = model.generate(input_ids, attention_mask=attention_mask) return tokenizer.decode(output[0], skip_special_tokens=True) context = "MONSTARS BASKETBALL @M0NSTARSBBALLWiggins answers Kemba's floater with a three! game tied 106-106. 8.9 to play. CHA ball!12/4/2016, 2:26:30 AM" question = 'who answered kemba\'s "floater"?' get_answer(question, context) # wiggins ``` Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
{"language": "en", "tags": ["qa", "Question Answering"], "datasets": ["tweet_qa"], "widget": [{"text": "question: how far away was the putt context: GET THE CIGAR READY! Miguel aces the 15th from 174 yards, and celebrates as only he knows how! The European Tour (@EuropeanTour) January, 15 2015"}]}
Narrativa/byt5-base-finetuned-tweet-qa
null
[ "transformers", "pytorch", "t5", "text2text-generation", "qa", "Question Answering", "en", "dataset:tweet_qa", "arxiv:1907.06292", "arxiv:1910.10683", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# ByT5-base fine-tuned for Hate Speech Detection (on Tweets) [ByT5](https://huggingface.co/google/byt5-base) base fine-tuned on [tweets hate speech detection](https://huggingface.co/datasets/tweets_hate_speech_detection) dataset for **Sequence Classification** downstream task. # Details of ByT5 - Base 🧠 ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base). ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292). Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Details of the downstream task (Sequence Classification as Text generation) - Dataset 📚 [tweets_hate_speech_detection](hhttps://huggingface.co/datasets/tweets_hate_speech_detection) The objective of this task is to detect hate speech in tweets. For the sake of simplicity, we say a tweet contains hate speech if it has a racist or sexist sentiment associated with it. So, the task is to classify racist or sexist tweets from other tweets. Formally, given a training sample of tweets and labels, where label ‘1’ denotes the tweet is racist/sexist and label ‘0’ denotes the tweet is not racist/sexist, your objective is to predict the labels on the given test dataset. - Data Instances: The dataset contains a label denoting is the tweet a hate speech or not ```json {'label': 0, # not a hate speech 'tweet': ' @user when a father is dysfunctional and is so selfish he drags his kids into his dysfunction. #run'} ``` - Data Fields: **label**: 1 - it is a hate speech, 0 - not a hate speech **tweet**: content of the tweet as a string - Data Splits: The data contains training data with **31962** entries ## Test set metrics 🧾 We created a representative test set with the 5% of the entries. The dataset is so imbalanced and we got a **F1 score of 79.8** ## Model in Action 🚀 ```sh git clone https://github.com/huggingface/transformers.git pip install -q ./transformers ``` ```python from transformers import AutoTokenizer, T5ForConditionalGeneration ckpt = 'Narrativa/byt5-base-tweet-hate-detection' tokenizer = AutoTokenizer.from_pretrained(ckpt) model = T5ForConditionalGeneration.from_pretrained(ckpt).to("cuda") def classify_tweet(tweet): inputs = tokenizer([tweet], padding='max_length', truncation=True, max_length=512, return_tensors='pt') input_ids = inputs.input_ids.to('cuda') attention_mask = inputs.attention_mask.to('cuda') output = model.generate(input_ids, attention_mask=attention_mask) return tokenizer.decode(output[0], skip_special_tokens=True) classify_tweet('here goes your tweet...') ``` Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
{"language": "en", "tags": ["hate", "speech"], "datasets": ["tweets_hate_speech_detection"], "widget": [{"text": "@user black lives really matter?"}]}
Narrativa/byt5-base-tweet-hate-detection
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "hate", "speech", "en", "dataset:tweets_hate_speech_detection", "arxiv:1907.06292", "arxiv:1910.10683", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilRoberta-stereotype This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0651 - Accuracy: 0.9892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0783 | 1.0 | 5615 | 0.0703 | 0.9847 | | 0.0468 | 2.0 | 11230 | 0.0573 | 0.9863 | | 0.0316 | 3.0 | 16845 | 0.0580 | 0.9882 | | 0.0172 | 4.0 | 22460 | 0.0591 | 0.9885 | | 0.0098 | 5.0 | 28075 | 0.0651 | 0.9892 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3 Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
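The auto-generated card stops at training details and gives no usage snippet. Below is a minimal usage sketch (not part of the original card); note that the label names returned depend on the id2label mapping stored in the checkpoint's config.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned stereotype-detection checkpoint with the
# text-classification pipeline. Label names come from the model config.
ckpt = "Narrativa/distilroberta-finetuned-stereotype-detection"
classifier = pipeline("text-classification", model=ckpt)

print(classifier("Cauterize is not just for fans of the guitarist, but those that love aggressive and infectious music."))
```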
{"license": "apache-2.0", "tags": ["generated_from_trainer", "stereotype", "gender", "gender_bias"], "metrics": ["accuracy"], "widget": [{"text": "Cauterize is not just for fans of the guitarist or his other projects, but those that love music that is both aggressive and infectious and gave the album 4 out of 5 stars ."}]}
Narrativa/distilroberta-finetuned-stereotype-detection
null
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "stereotype", "gender", "gender_bias", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# mT5-base fine-tuned on TyDiQA for multilingual Question Generation 🗺📖❓ [Google's mT5-base](https://huggingface.co/google/mt5-base) fine-tuned on [TyDi QA](https://huggingface.co/nlp/viewer/?dataset=tydiqa&config=secondary_task) (secondary task) for **multingual Question Generation** downstream task (by answer prepending). ## Details of mT5 [Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Details of the dataset 📚 **TyDi QA** is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD). | Dataset | Task | Split | # samples | | -------- | ----- |------| --------- | | TyDi QA | GoldP | train| 49881 | | TyDi QA | GoldP | valid| 5077 | ## Results on validation dataset 📝 ### WIP ## Model in Action 🚀 ### WIP Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
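Since the "Model in Action" section is still marked WIP, here is a minimal sketch of how an answer-prepended question-generation checkpoint like this one is typically queried. The `answer: ... context: ...` prompt format is an assumption based on this repository's widget example, not a documented contract.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch only: assumes the checkpoint expects inputs of the form
# "answer: <answer> context: <context>" (answer prepending).
ckpt = "Narrativa/mT5-base-finetuned-tydiQA-question-generation"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt).to(device)

def generate_question(answer, context, max_length=64):
    input_text = f"answer: {answer} context: {context}"
    features = tokenizer(input_text, return_tensors="pt").to(device)
    output = model.generate(**features, max_length=max_length)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(generate_question(
    "monitoring and managing PR strategy",
    "Sofía has a degree in Communications and public relations agency experience "
    "where she was in charge of monitoring and managing PR strategy including "
    "relations with the media and journalists.",
))
```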
{"language": "multilingual", "datasets": ["tydiqa"], "widget": [{"text": "answer: monitoring and managing PR strategy including relations with the media and journalists context: Sof\u00eda has a degree in Communications and public relations agency experience where she was in charge of monitoring and managing PR strategy including relations with the media and journalists."}]}
Narrativa/mT5-base-finetuned-tydiQA-question-generation
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "multilingual", "dataset:tydiqa", "arxiv:2010.11934", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# mT5-base fine-tuned on TyDiQA for multilingual QA 🗺📖❓ [Google's mT5-base](https://huggingface.co/google/mt5-base) fine-tuned on [TyDi QA](https://huggingface.co/nlp/viewer/?dataset=tydiqa&config=secondary_task) (secondary task) for **multingual Q&A** downstream task. ## Details of mT5 [Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Details of the dataset 📚 **TyDi QA** is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD). 
| Dataset | Task | Split | # samples | | -------- | ----- |------| --------- | | TyDi QA | GoldP | train| 49881 | | TyDi QA | GoldP | valid| 5077 | ## Results on validation dataset 📝 | Metric | # Value | | ------ | --------- | | **EM** | **60.88** | ## Model in Action 🚀 ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') tokenizer = AutoTokenizer.from_pretrained("Narrativa/mT5-base-finetuned-tydiQA-xqa") model = AutoModelForSeq2SeqLM.from_pretrained("Narrativa/mT5-base-finetuned-tydiQA-xqa").to(device) def get_response(question, context, max_length=32): input_text = 'question: %s context: %s' % (question, context) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'].to(device), attention_mask=features['attention_mask'].to(device), max_length=max_length) return tokenizer.decode(output[0], skip_special_tokens=True) # Some examples in different languages context = 'HuggingFace won the best Demo paper at EMNLP2020.' question = 'What won HuggingFace?' get_response(question, context) context = 'HuggingFace ganó la mejor demostración con su paper en la EMNLP2020.' question = 'Qué ganó HuggingFace?' get_response(question, context) context = 'HuggingFace выиграл лучшую демонстрационную работу на EMNLP2020.' question = 'Что победило в HuggingFace?' get_response(question, context) ``` Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
{"language": "multilingual", "datasets": ["tydiqa"], "widget": [{"text": "question: what does she do? context: Sof\u00eda has a degree in Communications and public relations agency experience where she was in charge of monitoring and managing PR strategy including relations with the media and journalists."}]}
Narrativa/mT5-base-finetuned-tydiQA-xqa
null
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "multilingual", "dataset:tydiqa", "arxiv:2010.11934", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
# mBART-large-50 fine-tuned on opus100 and opusbook for English to Portuguese translation. [mBART-50](https://huggingface.co/facebook/mbart-large-50/) large fine-tuned on the [opus100](https://huggingface.co/datasets/viewer/?dataset=opus100) dataset for the **NMT** downstream task. # Details of mBART-50 🧠 mBART-50 is a multilingual Sequence-to-Sequence model pre-trained using the "Multilingual Denoising Pretraining" objective. It was introduced in the [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper. mBART-50 was created to show that multilingual translation models can be obtained through multilingual fine-tuning: instead of fine-tuning in one direction, a pre-trained model is fine-tuned in many directions simultaneously. It extends the original mBART model with 25 extra languages to support multilingual machine translation across 50 languages. The pre-training objective is explained below. **Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data: `D = {D1, ..., DN }`, where each Di is a collection of monolingual documents in language `i`. The source documents are noised using two schemes: first, randomly shuffling the order of the original sentences, and second, a novel in-filling scheme in which spans of text are replaced with a single mask token. The model is then tasked with reconstructing the original text. 35% of each instance's words are masked by randomly sampling a span length according to a Poisson distribution `(λ = 3.5)`. The decoder input is the original text with a one-position offset. A language id symbol `LID` is used as the initial token to predict the sentence. ## Details of the downstream task (NMT) - Dataset 📚 - **Homepage:** [Link](http://opus.nlpl.eu/opus-100.php) - **Repository:** [GitHub](https://github.com/EdinburghNLP/opus-100-corpus) - **Paper:** [ARXIV](https://arxiv.org/abs/2004.11867) ### Dataset Summary OPUS-100 is English-centric, meaning that all training pairs include English on either the source or target side. The corpus covers 100 languages (including English). Languages were selected based on the volume of parallel data available in OPUS. ### Languages OPUS-100 contains approximately 55M sentence pairs. Of the 99 language pairs, 44 have 1M sentence pairs of training data, 73 have at least 100k, and 95 have at least 10k. ## Dataset Structure ### Data Fields - `src_tag`: `string` text in the source language - `tgt_tag`: `string` translation of the source text in the target language ### Data Splits The dataset is split into training, development, and test portions. Data was prepared by randomly sampling up to 1M sentence pairs per language pair for training and up to 2000 each for development and test. To ensure that there was no overlap (at the monolingual sentence level) between the training and development/test data, a filter was applied during sampling to exclude sentences that had already been sampled. Note that this was done cross-lingually so that, for instance, an English sentence in the Portuguese-English portion of the training data could not occur in the Hindi-English test set.
## Test set metrics 🧾 We got a **BLEU score of 20.61** ## Model in Action 🚀 ```sh git clone https://github.com/huggingface/transformers.git pip install -q ./transformers ``` ```python from transformers import MBart50TokenizerFast, MBartForConditionalGeneration ckpt = 'Narrativa/mbart-large-50-finetuned-opus-en-pt-translation' tokenizer = MBart50TokenizerFast.from_pretrained(ckpt) model = MBartForConditionalGeneration.from_pretrained(ckpt).to("cuda") tokenizer.src_lang = 'en_XX' def translate(text): inputs = tokenizer(text, return_tensors='pt') input_ids = inputs.input_ids.to('cuda') attention_mask = inputs.attention_mask.to('cuda') output = model.generate(input_ids, attention_mask=attention_mask, forced_bos_token_id=tokenizer.lang_code_to_id['pt_XX']) return tokenizer.decode(output[0], skip_special_tokens=True) translate('here your English text to be translated to Portuguese...') ``` Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
{"language": ["en", "pt"], "tags": ["translation"], "datasets": ["opus100", "opusbook"], "metrics": ["bleu"]}
Narrativa/mbart-large-50-finetuned-opus-en-pt-translation
null
[ "transformers", "pytorch", "mbart", "text2text-generation", "translation", "en", "pt", "dataset:opus100", "dataset:opusbook", "arxiv:2008.00401", "arxiv:2004.11867", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
# mBART-large-50 fine-tuned on opus100 and opusbook for Portuguese to English translation. [mBART-50](https://huggingface.co/facebook/mbart-large-50/) large fine-tuned on the [opus100](https://huggingface.co/datasets/viewer/?dataset=opus100) dataset for the **NMT** downstream task. # Details of mBART-50 🧠 mBART-50 is a multilingual Sequence-to-Sequence model pre-trained using the "Multilingual Denoising Pretraining" objective. It was introduced in the [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper. mBART-50 was created to show that multilingual translation models can be obtained through multilingual fine-tuning: instead of fine-tuning in one direction, a pre-trained model is fine-tuned in many directions simultaneously. It extends the original mBART model with 25 extra languages to support multilingual machine translation across 50 languages. The pre-training objective is explained below. **Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data: `D = {D1, ..., DN }`, where each Di is a collection of monolingual documents in language `i`. The source documents are noised using two schemes: first, randomly shuffling the order of the original sentences, and second, a novel in-filling scheme in which spans of text are replaced with a single mask token. The model is then tasked with reconstructing the original text. 35% of each instance's words are masked by randomly sampling a span length according to a Poisson distribution `(λ = 3.5)`. The decoder input is the original text with a one-position offset. A language id symbol `LID` is used as the initial token to predict the sentence. ## Details of the downstream task (NMT) - Dataset 📚 - **Homepage:** [Link](http://opus.nlpl.eu/opus-100.php) - **Repository:** [GitHub](https://github.com/EdinburghNLP/opus-100-corpus) - **Paper:** [ARXIV](https://arxiv.org/abs/2004.11867) ### Dataset Summary OPUS-100 is English-centric, meaning that all training pairs include English on either the source or target side. The corpus covers 100 languages (including English). Languages were selected based on the volume of parallel data available in OPUS. ### Languages OPUS-100 contains approximately 55M sentence pairs. Of the 99 language pairs, 44 have 1M sentence pairs of training data, 73 have at least 100k, and 95 have at least 10k. ## Dataset Structure ### Data Fields - `src_tag`: `string` text in the source language - `tgt_tag`: `string` translation of the source text in the target language ### Data Splits The dataset is split into training, development, and test portions. Data was prepared by randomly sampling up to 1M sentence pairs per language pair for training and up to 2000 each for development and test. To ensure that there was no overlap (at the monolingual sentence level) between the training and development/test data, a filter was applied during sampling to exclude sentences that had already been sampled. Note that this was done cross-lingually so that, for instance, an English sentence in the Portuguese-English portion of the training data could not occur in the Hindi-English test set.
## Test set metrics 🧾 We got a **BLEU score of 26.12** ## Model in Action 🚀 ```sh git clone https://github.com/huggingface/transformers.git pip install -q ./transformers ``` ```python from transformers import MBart50TokenizerFast, MBartForConditionalGeneration ckpt = 'Narrativa/mbart-large-50-finetuned-opus-pt-en-translation' tokenizer = MBart50TokenizerFast.from_pretrained(ckpt) model = MBartForConditionalGeneration.from_pretrained(ckpt).to("cuda") tokenizer.src_lang = 'pt_XX' def translate(text): inputs = tokenizer(text, return_tensors='pt') input_ids = inputs.input_ids.to('cuda') attention_mask = inputs.attention_mask.to('cuda') output = model.generate(input_ids, attention_mask=attention_mask, forced_bos_token_id=tokenizer.lang_code_to_id['en_XX']) return tokenizer.decode(output[0], skip_special_tokens=True) translate('here your Portuguese text to be translated to English...') ``` Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
{"language": ["pt", "en"], "tags": ["translation"], "datasets": ["opus100", "opusbook"], "metrics": ["bleu"]}
Narrativa/mbart-large-50-finetuned-opus-pt-en-translation
null
[ "transformers", "pytorch", "mbart", "text2text-generation", "translation", "pt", "en", "dataset:opus100", "dataset:opusbook", "arxiv:2008.00401", "arxiv:2004.11867", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Spanish GPT-2 trained on [Spanish RAP Lyrics](https://www.kaggle.com/smunoz3801/9325-letras-de-rap-en-espaol) Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
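The card does not include a usage snippet; a minimal generation sketch (standard GPT-2 text-generation usage, not taken from the original card) is:

```python
from transformers import pipeline, set_seed

# Minimal sketch: generate rap-style Spanish text with the fine-tuned GPT-2.
generator = pipeline("text-generation", model="Narrativa/spanish-gpt2-finetuned-rap-lyrics")
set_seed(42)

prompt = "Déjame contarte lo importante que es buscarte un plan"
print(generator(prompt, max_length=60, num_return_sequences=1)[0]["generated_text"])
```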
{"language": "es", "license": "mit", "tags": ["GPT-2", "Rap", "Lyrics", "Songs"], "datasets": ["large_spanish_corpus"], "widget": [{"text": "D\u00e9jame contarte lo importante que es buscarte un plan\nNo para golpearles o ganarles, sino para darles paz\n"}]}
Narrativa/spanish-gpt2-finetuned-rap-lyrics
null
[ "transformers", "pytorch", "gpt2", "text-generation", "GPT-2", "Rap", "Lyrics", "Songs", "es", "dataset:large_spanish_corpus", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Narrativa/t5-base-finetuned-totto-table-to-text
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DeBERTa v3 small fine-tuned on hate_speech18 dataset for Hate Speech Detection This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the hate_speech18 dataset. It achieves the following results on the evaluation set: - Loss: 0.2922 - Accuracy: 0.9161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4147 | 1.0 | 650 | 0.3910 | 0.8832 | | 0.2975 | 2.0 | 1300 | 0.2922 | 0.9161 | | 0.2575 | 3.0 | 1950 | 0.3555 | 0.9051 | | 0.1553 | 4.0 | 2600 | 0.4263 | 0.9124 | | 0.1267 | 5.0 | 3250 | 0.4238 | 0.9161 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
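For completeness, here is a minimal usage sketch that is not part of the auto-generated card. The returned label names come from the checkpoint's config and may be generic (e.g. LABEL_0 / LABEL_1) rather than the hate_speech18 class names.

```python
from transformers import pipeline

# Sketch: run the fine-tuned DeBERTa v3 small checkpoint through the
# text-classification pipeline for hate speech detection.
ckpt = "Narrativaai/deberta-v3-small-finetuned-hate_speech18"
classifier = pipeline("text-classification", model=ckpt)

print(classifier("ok, so do we need to kill them too or are the slavs okay ?"))
```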
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["hate_speech18"], "metrics": ["accuracy"], "widget": [{"text": "ok, so do we need to kill them too or are the slavs okay ? for some reason whenever i hear the word slav , the word slobber comes to mind and i picture a slobbering half breed creature like the humpback of notre dame or Igor haha"}], "model-index": [{"name": "deberta-v3-small-hate-speech", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "hate_speech18", "type": "hate_speech18", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.916058394160584, "name": "Accuracy"}]}]}]}
Narrativaai/deberta-v3-small-finetuned-hate_speech18
null
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "dataset:hate_speech18", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RoBERTa-large-fake-news-detection-spanish This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on an [Spanish Fake News Dataset](https://sites.google.com/view/iberlef2020/#h.p_w0c31bn0r-SW). It achieves the following results on the evaluation set: - Loss: 1.7474 - F1: **0.7717** - Accuracy: 0.7797 > So, based on the [leaderboard](https://sites.google.com/view/fakedes/results?authuser=0) our model **outperforms** the best model (scores F1 = 0.7666). ## Model description RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. ## Intended uses & limitations The objective of this task is to decide if a news item is fake or real by analyzing its textual representation. ## Training and evaluation data **FakeDeS**: [Fake News Detection in Spanish Shared Task](https://sites.google.com/view/fakedes/home) Fake news provides information that aims to manipulate people for different purposes: terrorism, political elections, advertisement, satire, among others. In social networks, misinformation extends in seconds among thousands of people, so it is necessary to develop tools that help control the amount of false information on the web. Similar tasks are detection of popularity in social networks and detection of subjectivity of messages in this media. A fake news detection system aims to help users detect and filter out potentially deceptive news. The prediction of intentionally misleading news is based on the analysis of truthful and fraudulent previously reviewed news, i.e., annotated corpora. The Spanish Fake News Corpus is a collection of news compiled from several web sources: established newspapers websites,media companies websites, special websites dedicated to validating fake news, websites designated by different journalists as sites that regularly publish fake news. The news were collected from January to July of 2018 and all of them were written in Mexican Spanish. The corpus has 971 news collected from January to July, 2018, from different sources: - Established newspapers websites, - Media companies websites, - Special websites dedicated to validating fake news, - Websites designated by different journalists as sites that regularly publish fake news. The corpus was tagged considering only two classes (true or fake), following a manual labeling process: - A news is true if there is evidence that it has been published in reliable sites. - A news is fake if there is news from reliable sites or specialized website in detection of deceptive content that contradicts it or no other evidence was found about the news besides the source. - We collected the true-fake news pair of an event so there is a correlation of news in the corpus. In order to avoid topic bias, the corpus covers news from 9 different topics: Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society. As it can be seen in the table below, the number of fake and true news is quite balanced. 
Approximately 70% will be used as training corpus (676 news), and the 30% as testing corpus (295 news). The training corpus contains the following information: - Category: Fake/ True - Topic: Science/ Sport/ Economy/ Education/ Entertainment/ Politics, Health/ Security/ Society - Headline: The title of the news. - Text: The complete text of the news. - Link: The URL where the news was published. More information needed ## Training procedure TBA ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | No log | 1.0 | 243 | 0.6282 | 0.7513 | 0.75 | | No log | 2.0 | 486 | 0.9600 | 0.7346 | 0.7587 | | 0.5099 | 3.0 | 729 | 1.2128 | 0.7656 | 0.7570 | | 0.5099 | 4.0 | 972 | 1.4001 | 0.7606 | 0.7622 | | 0.1949 | 5.0 | 1215 | 1.9748 | 0.6475 | 0.7220 | | 0.1949 | 6.0 | 1458 | 1.7386 | 0.7706 | 0.7710 | | 0.0263 | 7.0 | 1701 | 1.7474 | 0.7717 | 0.7797 | | 0.0263 | 8.0 | 1944 | 1.8114 | 0.7695 | 0.7780 | | 0.0046 | 9.0 | 2187 | 1.8444 | 0.7709 | 0.7797 | | 0.0046 | 10.0 | 2430 | 1.8552 | 0.7709 | 0.7797 | ### Fast usage with HF `pipelines` ```python from transformers import pipeline ckpt = "Narrativaai/fake-news-detection-spanish" classifier = pipeline("text-classification", model=ckpt) headline = "Your headline" text = "Your article text here..." classifier(headline + " [SEP] " + text) ``` ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3 Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
{"language": "es", "tags": ["generated_from_trainer", "fake", "news", "competition"], "datasets": ["fakedes"], "metrics": ["f1", "accuracy"], "widget": [{"text": "La palabra \"haiga\", aceptada por la RAE [SEP] La palabra \"haiga\", aceptada por la RAE La Real Academia de la Lengua (RAE), ha aceptado el uso de \"HAIGA\", para su utilizaci\u00f3n en las tres personas del singular del presente del subjuntivo del verbo hacer, aunque asegura que la forma m\u00e1s recomendable en la lengua culta para este tiempo, sigue siendo \"haya\". As\u00ed lo han confirmado fuentes de la RAE, que explican que este cambio ha sido propuesto y aprobado por el pleno de la Academia de la Lengua, tras la extendida utilizaci\u00f3n por todo el territorio nacional, sobre todo, empleado por personas carentes de estudios o con estudios b\u00e1sicos de graduado escolar. Ya no ser\u00e1 objeto de burla ese compa\u00f1ero que a diario repite aquello de \"Mientras que haiga faena, no podemos quejarnos\" o esa abuela que repite aquello de \"El que haiga sacao los juguetes, que los recoja\". Entre otras palabras novedosas que ha aceptado la RAE, contamos tambi\u00e9n con \"Descambiar\", significa deshacer un cambio, por ejemplo \"devolver la compra\". Visto lo visto, nadie apostar\u00eda que la palabra \"follamigos\" sea la siguiente de la lista."}], "model-index": [{"name": "roberta-large-fake-news-detection-spanish", "results": []}]}
Narrativaai/fake-news-detection-spanish
null
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "fake", "news", "competition", "es", "dataset:fakedes", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-mlm This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6481 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "test-mlm", "results": []}]}
Narshion/bert-base-multilingual-cased-mwach
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-urgency This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the mWACH NEO dataset. It achieves the following results on the evaluation set: - Loss: 2.2797 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.1408 | 1.0 | 5659 | 3.6705 | | 2.8777 | 2.0 | 11318 | 2.5536 | | 2.561 | 3.0 | 16977 | 2.2740 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
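No usage snippet is included in the auto-generated card. A minimal sketch, assuming the checkpoint keeps the standard BERT masked-LM head, could look like this:

```python
from transformers import pipeline

# Sketch only: the checkpoint is a masked-LM fine-tune of
# bert-base-multilingual-cased, so it should load with the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="Narshion/bert-base-multilingual-cased-urgency")

# The example sentence is illustrative; [MASK] is the BERT mask token.
print(fill_mask("Please call me back, this is [MASK] urgent."))
```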
{"tags": ["generated_from_trainer"], "datasets": []}
Narshion/bert-base-multilingual-cased-urgency
null
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
zero-shot-classification
transformers
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. It outperforms BERT and RoBERTa on majority of NLU tasks with 80GB training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. This is the DeBERTa large model fine-tuned with MNLI task. #### Fine-tuning on NLU tasks We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks. | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B | |---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------| | | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S | | BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- | | RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- | | XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- | | [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 | | [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7| | [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9| |**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** | -------- #### Notes. - <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when start from MNLI fine-tuned models, however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks. - <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp** ```bash cd transformers/examples/text-classification/ export TASK_NAME=mrpc python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\\n--task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \\\n--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16 ``` ### Citation If you find DeBERTa useful for your work, please cite the following paper: ``` latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
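This repository is tagged for zero-shot classification, so a minimal sketch of using the MNLI-fine-tuned checkpoint through the NLI-based zero-shot pipeline follows. The candidate labels are purely illustrative.

```python
from transformers import pipeline

# Sketch: use the MNLI-fine-tuned DeBERTa-large checkpoint as an NLI-based
# zero-shot classifier. Candidate labels below are just an example.
classifier = pipeline("zero-shot-classification", model="Narsil/deberta-large-mnli-zero-cls")

result = classifier(
    "one day I will see the world",
    candidate_labels=["travel", "cooking", "dancing"],
)
print(result["labels"][0], result["scores"][0])
```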
{"language": "en", "license": "mit", "tags": ["deberta-v1", "deberta-mnli"], "tasks": "mnli", "thumbnail": "https://huggingface.co/front/thumbnails/microsoft.png", "pipeline_tag": "zero-shot-classification"}
Narsil/deberta-large-mnli-zero-cls
null
[ "transformers", "pytorch", "deberta", "text-classification", "deberta-v1", "deberta-mnli", "zero-shot-classification", "en", "arxiv:2006.03654", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Narsil/esm1b_t33_650M_UR50S
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Narsil/fr_pretrained
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# GPT-2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."}, {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"}, {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"}, {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"}, {'generated_text': 'Hello, I\\'m a language model, not a language model"\ \ The concept of "no-tricks" comes in handy later with new'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = TFGPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("The White man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The White man worked as a mannequin for'}, {'generated_text': 'The White man worked as a maniser of the'}, {'generated_text': 'The White man worked as a bus conductor by day'}, {'generated_text': 'The White man worked as a plumber at the'}, {'generated_text': 'The White man worked as a journalist. He had'}] >>> set_seed(42) >>> generator("The Black man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The Black man worked as a man at a restaurant'}, {'generated_text': 'The Black man worked as a car salesman in a'}, {'generated_text': 'The Black man worked as a police sergeant at the'}, {'generated_text': 'The Black man worked as a man-eating monster'}, {'generated_text': 'The Black man worked as a slave, and was'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. 
## Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1,17 | 37.50 | 75.20 | ### BibTeX entry and citation info ```bibtex @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } ``` <a href="https://huggingface.co/exbert/?model=gpt2"> \t<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"language": "en", "license": "mit", "tags": ["exbert"], "pipeline_tag": "text-generation"}
Narsil/gpt2
null
[ "transformers", "pytorch", "tf", "jax", "tflite", "rust", "safetensors", "gpt2", "text-generation", "exbert", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
image-segmentation
generic
## Keras semantic segmentation models on the 🤗Hub! 🐶 🐕 🐩 The image classification task tells us which class is assigned to an image, and the object detection task draws a bounding box around an object in an image. But what if we want to know the shapes of the objects in an image? Segmentation models help us segment images and reveal their shapes, and the task has many variants. You can host your Keras segmentation models on the Hub. Semantic segmentation models classify pixels, meaning they assign a class (say, cat or dog) to each pixel. The output of a model looks like the following. ![Raw Output](./raw_output.jpg) We need to get the best prediction for every pixel. ![Mask](./mask.jpg) This is still not readable. We have to convert this into a separate binary mask for each class and then into a readable format by encoding each mask as base64. We return a list of dicts, and each dictionary contains the label itself, the base64 code and a score (semantic segmentation models don't return a score, so we always return 1.0 in this case). You can find the full implementation in ```pipeline.py```. ![Binary Mask](./binary_mask.jpg) Now that you know the output format expected from the model, you can host your Keras segmentation models (and other semantic segmentation models) in a similar fashion. Try it yourself and host your segmentation models! ![Segmented Cat](./hircin_the_cat.png)
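For illustration, here is a rough sketch of the post-processing described above. It is not the actual `pipeline.py`; the label names and the `(height, width, num_classes)` output shape are assumptions.

```python
import base64
import io

import numpy as np
from PIL import Image

# Rough sketch of the post-processing described in the card (not the actual
# pipeline.py). Label names and the output shape are illustrative assumptions.
LABELS = ["background", "pet", "border"]

def postprocess(raw_output):
    """raw_output: numpy array of shape (height, width, num_classes)."""
    best = raw_output.argmax(axis=-1)  # best class index for every pixel
    results = []
    for idx, label in enumerate(LABELS):
        binary = (best == idx).astype(np.uint8) * 255  # one binary mask per class
        buffer = io.BytesIO()
        Image.fromarray(binary).save(buffer, format="PNG")
        results.append({
            "label": label,
            "mask": base64.b64encode(buffer.getvalue()).decode("utf-8"),
            "score": 1.0,  # semantic segmentation has no per-mask score
        })
    return results
```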
{"license": "apache-2.0", "library_name": "generic", "tags": ["image-segmentation", "generic"], "pipeline_tag": "image-segmentation", "dataset": ["oxfort-iit pets"]}
Narsil/pet-segmentation
null
[ "generic", "tf", "image-segmentation", "license:apache-2.0", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Narsil/pretrained
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Narsil/pretrained2
null
[ "transformers", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
Small change. again. again ? again.
{}
Narsil/small
null
[ "transformers", "tf", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
Small change. again. again ? again.
{}
Narsil/small2
null
[ "transformers", "pytorch", "tf", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
```python import tempfile from tokenizers import Tokenizer, models, processors from transformers.tokenization_utils_fast import PreTrainedTokenizerFast vocab = [(chr(i), i) for i in range(256)] tokenizer = Tokenizer(models.Unigram(vocab)) tokenizer.add_special_tokens(["<bos>", "<eos>"]) tokenizer.post_processor = processors.TemplateProcessing( single="<bos> $0 <eos>", special_tokens=[("<bos>", 256), ("<eos>", 257)] ) with tempfile.NamedTemporaryFile() as f: tokenizer.save(f.name) real_tokenizer = PreTrainedTokenizerFast(tokenizer_file=f.name, eos_token="<eos>", bos_token="<bos>") real_tokenizer._tokenizer.save("dummy.json") ``` Small change.
{}
Narsil/small_conversational_test
null
[ "transformers", "albert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
```python import tempfile from tokenizers import Tokenizer, models from transformers import PreTrainedTokenizerFast model_max_length = 4 vocab = [(chr(i), i) for i in range(256)] tokenizer = Tokenizer(models.Unigram(vocab)) with tempfile.NamedTemporaryFile() as f: tokenizer.save(f.name) real_tokenizer = PreTrainedTokenizerFast(tokenizer_file=f.name, model_max_length=model_max_length) real_tokenizer._tokenizer.save("dummy/tokenizer.json") ``` config uses Albert which works with a minimal `config.json`
{}
Narsil/small_summarization_test
null
[ "transformers", "albert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Narsil/tiny-distilbert-sequence-classification
null
[ "transformers", "pytorch", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Narsil/tiny-distilbert
null
[ "transformers", "pytorch", "tf", "distilbert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nashwan/mybot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nason/beit-base-patch16-224-pt22k-ft22k
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nastya/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nastya/wav2vec2-base-finetuned-ks
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
Говорили: "Погоди", уходил с дождём Эта ночь нужна, переваривал сон Вы порвали паруса, ожидая восторг Это мой Тачтаун, это мой Гонконг Надо созерцать, и не более того Либо до конца переполох Хитроматы пустот, наливай по сто Забывай мой голос и меня самого Забывай мой рай, я пропитый бадман Добровольно приговаривал, а вам по делом Заливал до дна, дабы дать по щам Не хочу себя жалеть, и не буду прощать Этот мир не смог меня сохранить Потеряли головы, теряя нить Во время дабы любить без обид и жить Не забыть нам бед, и незачем творить Ночи в одного, ночи в одного Холили, лелеяли убитого меня собой Ночи в одного, ночи в одного Верили в меня, как никогда, никто и ни в кого Ночи в одного, ночи в одного Холили, лелеяли убитого меня собой Ночи в одного, ночи в одного Верили в меня, как никогда, никто и ни в кого
{}
Nasvai1702/Night
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Natej815/BookProject
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Natej815/EleutherBooks
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nath/Bla
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NathNyedjan/kkkkk
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
Test for use in Google Colab :'(
{}
NathanZhu/GabHateCorpusTrained
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Naturealbe/DialoGPT-small-harrypotter-2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Naturealbe/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Naturealbe/DialoGPT-small-tonystark
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# Welcome to KanBERTo (ಕನ್ಬರ್ಟೋ)

## Model Description

> This is a small language model for the [Kannada](https://en.wikipedia.org/wiki/Kannada) language, trained on 1M data samples taken from the [OSCAR page](https://traces1.inria.fr/oscar/files/compressed-orig/kn.txt.gz).

## Training params

- **Dataset** - 1M data samples from the [OSCAR page](https://traces1.inria.fr/oscar/) were used to train this model. Even though the full dataset is 1.7 GB, I picked only 1M samples because of resource constraints for training. If you are interested in collaboration and have the computational resources to train on the full data, you are most welcome to do so.
- **Preprocessing** - ByteLevelBPETokenizer is used to tokenize the sentences at character level, and the vocabulary size is set to 52k as per the standard values given by 🤗.
- **Hyperparameters** -
  - __ByteLevelBPETokenizer__: vocabulary size = 52_000 and min_frequency = 2
  - __Trainer__:
    - num_train_epochs=12 - trained for 12 epochs
    - per_gpu_train_batch_size=64 - batch size of 64 samples
    - save_steps=10_000 - save a checkpoint every 10k steps
    - save_total_limit=2 - keep at most 2 checkpoints

**Intended uses & limitations**

This is for anyone who wants to make use of Kannada language models for tasks like language generation, translation and many more use cases.

**Whatever else is helpful!**

If you are interested in collaboration, feel free to reach me: [Naveen](mailto:[email protected])
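A minimal sketch of querying KanBERTo for masked-token prediction with the 🤗 `fill-mask` pipeline; the example sentence and mask placement below are purely illustrative:

```python
from transformers import pipeline

# KanBERTo is a RoBERTa-style masked language model, so the mask token is <mask>.
fill_mask = pipeline("fill-mask", model="Naveen-k/KanBERTo")

# Illustrative Kannada sentence with one word masked out.
for prediction in fill_mask("ನಾನು ಕನ್ನಡ <mask> ಮಾತನಾಡುತ್ತೇನೆ."):
    print(prediction["token_str"], round(prediction["score"], 3))
```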
{"language": "kn"}
Naveen-k/KanBERTo
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "kn", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Navid/AdvBERTRanker
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Navigator/DialgoGPT-medium-marty
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Marty McFly model
{"tags": ["conversational"]}
Navigator/DialoGPT-medium-martymcfly
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Navigator/dodydard-marty
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Navigator/dodydard
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Chandler Bing DialoGPT Model
{"tags": ["conversational"]}
Navya2608/DialoGPT-medium-chandler
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Rachel Green DialoGPT Model
{"tags": ["conversational"]}
Navya2608/DialoGPT-medium-rachel
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Navya2608/DialoGPT-small-tonystark
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Tony Stark DialoGPT Model
{"tags": ["conversational"]}
Navya2608/DialoGPT-small-tonystarkscript
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nawal/bert_blood
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nay/Nay
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
null
# Norwegian Wav2Vec2 Model - 1B - Bokmål

This achieves the following results on the test set with a 5-gram KenLM:
- WER: 0.0668
- CER: 0.0256

Without using a language model, we are getting these results:
- WER: ???
- CER: ???

## Model description

This is one of several Wav2Vec models created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://huggingface.co/datasets/NbAiLab/NPSC) to the 🤗 Dataset format and used that as the main source for training.

We release all code developed during the event so that the Norwegian NLP community can build upon it to develop even better Norwegian ASR models. Finetuning these models is not very compute demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.

## Team

The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.

## Training procedure

To reproduce these results, we strongly recommend that you follow the [instructions from HuggingFace](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.

When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running this will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!

### Language Model

As you can see from the results above, adding even a simple 5-gram language model will significantly improve the results. 🤗 has provided another [very nice blog post](https://huggingface.co/blog/wav2vec2-with-ngram) about how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
### Parameters

The following parameters were used during training:

```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="facebook/wav2vec2-xls-r-1b"
--dataset_config_name="16K_mp3_bokmaal"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="40"
--per_device_train_batch_size="12"
--per_device_eval_batch_size="12"
--gradient_accumulation_steps="2"
--learning_rate="2e-5"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--ctc_zero_infinity=True
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train
--do_eval
--push_to_hub
--preprocessing_num_workers="16"
```

With these settings, training might take 3-4 days on an average GPU. You should, however, get a decent model and faster results by tweaking the following parameters:

| Parameter | Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
| gradient_accumulation_steps | Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate | Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs | Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs |
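Once you have a trained checkpoint (or if you just want to try the finished bokmål model linked above), transcription can be sketched roughly as follows; the model id is only an example and `speech.wav` is a placeholder for a 16 kHz mono recording:

```python
from transformers import pipeline

# Any compatible Wav2Vec2 CTC checkpoint can be used here.
asr = pipeline("automatic-speech-recognition", model="NbAiLab/XLSR-300M-bokmaal")

# "speech.wav" is a placeholder path to a recording of Norwegian speech.
print(asr("speech.wav")["text"])
```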
{"language": ["nb-NO"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "xxx-robust-speech-event", false, "nb-NO"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-xls-r-1b-npsc-bokmaal-low-27k", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.06686424124625939, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.025697763468940576, "name": "Test (Bokm\u00e5l) CER"}]}]}]}
NbAiLab/Wav2Vec-Template
null
[ "automatic-speech-recognition", "NbAiLab/NPSC", "xxx-robust-speech-event", "no", "nb-NO", "dataset:NbAiLab/NPSC", "license:apache-2.0", "model-index", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLSR-1B-bokmaal-low This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1579 - Wer: 0.0722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.7e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 34.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.434 | 0.24 | 500 | 0.1704 | 0.1378 | | 0.2833 | 0.48 | 1000 | 0.1638 | 0.1324 | | 0.2478 | 0.72 | 1500 | 0.1606 | 0.1240 | | 0.2276 | 0.97 | 2000 | 0.1562 | 0.1212 | | 0.2208 | 1.21 | 2500 | 0.1576 | 0.1172 | | 0.2148 | 1.45 | 3000 | 0.1502 | 0.1119 | | 0.1994 | 1.69 | 3500 | 0.1409 | 0.1110 | | 0.1932 | 1.93 | 4000 | 0.1432 | 0.1112 | | 0.2122 | 2.17 | 4500 | 0.1443 | 0.1098 | | 0.2177 | 2.42 | 5000 | 0.1329 | 0.1102 | | 0.2058 | 2.66 | 5500 | 0.1403 | 0.1070 | | 0.2216 | 2.9 | 6000 | 0.1342 | 0.1067 | | 0.1984 | 3.14 | 6500 | 0.1370 | 0.1030 | | 0.2056 | 3.38 | 7000 | 0.1371 | 0.1041 | | 0.1735 | 3.62 | 7500 | 0.1296 | 0.1003 | | 0.203 | 3.87 | 8000 | 0.1301 | 0.1005 | | 0.1835 | 4.11 | 8500 | 0.1310 | 0.1004 | | 0.178 | 4.35 | 9000 | 0.1300 | 0.0959 | | 0.1585 | 4.59 | 9500 | 0.1277 | 0.0966 | | 0.1848 | 4.83 | 10000 | 0.1260 | 0.0974 | | 0.169 | 5.07 | 10500 | 0.1281 | 0.0969 | | 0.1666 | 5.32 | 11000 | 0.1291 | 0.1003 | | 0.1552 | 5.56 | 11500 | 0.1271 | 0.0959 | | 0.2736 | 5.8 | 12000 | 0.1320 | 0.0935 | | 0.2845 | 6.04 | 12500 | 0.1299 | 0.0921 | | 0.1536 | 6.28 | 13000 | 0.1282 | 0.0927 | | 0.1491 | 6.52 | 13500 | 0.1240 | 0.0906 | | 0.1579 | 6.77 | 14000 | 0.1208 | 0.0921 | | 0.16 | 7.01 | 14500 | 0.1182 | 0.0903 | | 0.1367 | 7.25 | 15000 | 0.1214 | 0.0922 | | 0.1499 | 7.49 | 15500 | 0.1232 | 0.0916 | | 0.148 | 7.73 | 16000 | 0.1184 | 0.0896 | | 0.1426 | 7.97 | 16500 | 0.1201 | 0.0889 | | 0.1471 | 8.22 | 17000 | 0.1256 | 0.0882 | | 0.1358 | 8.46 | 17500 | 0.1265 | 0.0909 | | 0.1245 | 8.7 | 18000 | 0.1263 | 0.0886 | | 0.1407 | 8.94 | 18500 | 0.1226 | 0.0885 | | 0.1289 | 9.18 | 19000 | 0.1315 | 0.0873 | | 0.1326 | 9.42 | 19500 | 0.1233 | 0.0868 | | 0.1305 | 9.67 | 20000 | 0.1237 | 0.0870 | | 0.1432 | 9.91 | 20500 | 0.1234 | 0.0857 | | 0.1205 | 10.15 | 21000 | 0.1303 | 0.0858 | | 0.1248 | 10.39 | 21500 | 0.1252 | 0.0858 | | 0.1251 | 10.63 | 22000 | 0.1253 | 0.0869 | | 0.1143 | 10.87 | 22500 | 0.1266 | 0.0860 | | 0.1155 | 11.12 | 23000 | 0.1219 | 0.0862 | | 0.1227 | 11.36 | 23500 | 0.1329 | 0.0864 | | 0.1229 | 11.6 | 24000 | 0.1244 | 0.0855 | | 0.1112 | 11.84 | 24500 | 0.1356 | 0.0851 | | 0.2163 | 12.08 | 25000 | 0.1252 | 0.0847 | | 0.1146 | 12.32 | 25500 | 0.1211 | 0.0837 | | 0.1058 | 12.57 | 26000 | 0.1247 | 0.0843 | | 0.1099 | 12.81 | 26500 | 0.1189 | 0.0833 | | 0.1028 | 13.05 | 27000 | 0.1303 | 0.0815 | | 0.1092 | 13.29 | 27500 | 0.1305 | 0.0838 | | 0.1076 | 13.53 | 28000 | 0.1276 | 0.0842 | | 0.1074 | 13.77 | 28500 | 0.1268 | 0.0844 | | 0.0971 | 14.02 | 
29000 | 0.1322 | 0.0839 | | 0.1109 | 14.26 | 29500 | 0.1287 | 0.0821 | | 0.0991 | 14.5 | 30000 | 0.1289 | 0.0831 | | 0.1095 | 14.74 | 30500 | 0.1273 | 0.0822 | | 0.1015 | 14.98 | 31000 | 0.1326 | 0.0816 | | 0.1051 | 15.22 | 31500 | 0.1337 | 0.0814 | | 0.0894 | 15.47 | 32000 | 0.1331 | 0.0802 | | 0.1 | 15.71 | 32500 | 0.1304 | 0.0798 | | 0.0957 | 15.95 | 33000 | 0.1293 | 0.0824 | | 0.0921 | 16.19 | 33500 | 0.1382 | 0.0808 | | 0.0986 | 16.43 | 34000 | 0.1301 | 0.0788 | | 0.098 | 16.67 | 34500 | 0.1305 | 0.0795 | | 0.0974 | 16.92 | 35000 | 0.1325 | 0.0796 | | 0.0886 | 17.16 | 35500 | 0.1332 | 0.0796 | | 0.0892 | 17.4 | 36000 | 0.1327 | 0.0785 | | 0.0917 | 17.64 | 36500 | 0.1304 | 0.0793 | | 0.0919 | 17.88 | 37000 | 0.1353 | 0.0791 | | 0.1007 | 18.12 | 37500 | 0.1340 | 0.0791 | | 0.0831 | 18.37 | 38000 | 0.1327 | 0.0786 | | 0.0862 | 18.61 | 38500 | 0.1343 | 0.0792 | | 0.0837 | 18.85 | 39000 | 0.1334 | 0.0777 | | 0.0771 | 19.09 | 39500 | 0.1456 | 0.0778 | | 0.0841 | 19.33 | 40000 | 0.1365 | 0.0784 | | 0.0874 | 19.57 | 40500 | 0.1379 | 0.0779 | | 0.0773 | 19.82 | 41000 | 0.1359 | 0.0776 | | 0.0771 | 20.06 | 41500 | 0.1392 | 0.0776 | | 0.0861 | 20.3 | 42000 | 0.1395 | 0.0774 | | 0.0773 | 20.54 | 42500 | 0.1356 | 0.0775 | | 0.069 | 20.78 | 43000 | 0.1399 | 0.0765 | | 0.0823 | 21.02 | 43500 | 0.1469 | 0.0774 | | 0.0747 | 21.27 | 44000 | 0.1415 | 0.0768 | | 0.0703 | 21.51 | 44500 | 0.1405 | 0.0778 | | 0.0776 | 21.75 | 45000 | 0.1492 | 0.0778 | | 0.0833 | 21.99 | 45500 | 0.1448 | 0.0767 | | 0.0796 | 22.23 | 46000 | 0.1434 | 0.0761 | | 0.0613 | 22.47 | 46500 | 0.1446 | 0.0768 | | 0.0753 | 22.72 | 47000 | 0.1439 | 0.0757 | | 0.076 | 22.96 | 47500 | 0.1402 | 0.0759 | | 0.0619 | 23.2 | 48000 | 0.1473 | 0.0767 | | 0.1322 | 23.44 | 48500 | 0.1431 | 0.0766 | | 0.0691 | 23.68 | 49000 | 0.1452 | 0.0753 | | 0.061 | 23.92 | 49500 | 0.1452 | 0.0752 | | 0.0716 | 24.17 | 50000 | 0.1429 | 0.0756 | | 0.074 | 24.41 | 50500 | 0.1440 | 0.0746 | | 0.0696 | 24.65 | 51000 | 0.1459 | 0.0756 | | 0.081 | 24.89 | 51500 | 0.1443 | 0.0751 | | 0.0754 | 25.13 | 52000 | 0.1483 | 0.0755 | | 0.0864 | 25.37 | 52500 | 0.1467 | 0.0757 | | 0.0662 | 25.62 | 53000 | 0.1471 | 0.0748 | | 0.109 | 25.86 | 53500 | 0.1472 | 0.0759 | | 0.0682 | 26.1 | 54000 | 0.1539 | 0.0748 | | 0.0655 | 26.34 | 54500 | 0.1469 | 0.0743 | | 0.0651 | 26.58 | 55000 | 0.1553 | 0.0748 | | 0.0666 | 26.82 | 55500 | 0.1520 | 0.0744 | | 0.0724 | 27.07 | 56000 | 0.1526 | 0.0738 | | 0.067 | 27.31 | 56500 | 0.1489 | 0.0738 | | 0.0658 | 27.55 | 57000 | 0.1518 | 0.0738 | | 0.0581 | 27.79 | 57500 | 0.1518 | 0.0739 | | 0.0639 | 28.03 | 58000 | 0.1495 | 0.0736 | | 0.0606 | 28.27 | 58500 | 0.1549 | 0.0739 | | 0.0641 | 28.52 | 59000 | 0.1513 | 0.0735 | | 0.0612 | 28.76 | 59500 | 0.1524 | 0.0739 | | 0.0536 | 29.0 | 60000 | 0.1565 | 0.0741 | | 0.0574 | 29.24 | 60500 | 0.1541 | 0.0741 | | 0.057 | 29.48 | 61000 | 0.1555 | 0.0741 | | 0.0624 | 29.72 | 61500 | 0.1590 | 0.0736 | | 0.0531 | 29.97 | 62000 | 0.1590 | 0.0734 | | 0.0661 | 30.21 | 62500 | 0.1599 | 0.0732 | | 0.0641 | 30.45 | 63000 | 0.1576 | 0.0730 | | 0.0562 | 30.69 | 63500 | 0.1593 | 0.0734 | | 0.0527 | 30.93 | 64000 | 0.1604 | 0.0730 | | 0.0579 | 31.17 | 64500 | 0.1571 | 0.0734 | | 0.0508 | 31.42 | 65000 | 0.1603 | 0.0733 | | 0.0524 | 31.66 | 65500 | 0.1588 | 0.0726 | | 0.0564 | 31.9 | 66000 | 0.1571 | 0.0727 | | 0.0551 | 32.14 | 66500 | 0.1584 | 0.0728 | | 0.0564 | 32.38 | 67000 | 0.1565 | 0.0726 | | 0.0628 | 32.62 | 67500 | 0.1558 | 0.0725 | | 0.0561 | 32.87 | 68000 | 0.1582 | 0.0727 | | 0.0553 | 33.11 | 68500 | 0.1591 | 
0.0726 | | 0.0504 | 33.35 | 69000 | 0.1590 | 0.0725 | | 0.0539 | 33.59 | 69500 | 0.1582 | 0.0723 | | 0.0576 | 33.83 | 70000 | 0.1579 | 0.0722 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
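For reference, a WER figure like the one reported above can be computed from model transcriptions with the `datasets` metric API available in the versions listed here (the strings below are placeholders; the metric needs the `jiwer` backend installed):

```python
from datasets import load_metric

# Placeholder transcriptions; in practice these come from running the model
# over the NPSC test split.
predictions = ["det er et eksempel"]
references = ["dette er et eksempel"]

wer_metric = load_metric("wer")
print(wer_metric.compute(predictions=predictions, references=references))
```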
{"tags": ["generated_from_trainer"], "model-index": [{"name": "XLSR-1B-bokmaal-low", "results": []}]}
NbAiLab/XLSR-1B-bokmaal-low
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLSR-300M-bokmaal This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3_BOKMAAL dataset. It achieves the following results on the evaluation set: - Loss: 0.1635 - Wer: 0.1005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.0307 | 0.32 | 500 | 3.0026 | 1.0 | | 2.7865 | 0.64 | 1000 | 2.4849 | 0.9926 | | 0.7522 | 0.95 | 1500 | 0.4567 | 0.3594 | | 0.5703 | 1.27 | 2000 | 0.3440 | 0.2586 | | 0.4762 | 1.59 | 2500 | 0.2925 | 0.2178 | | 0.4585 | 1.91 | 3000 | 0.2442 | 0.1981 | | 0.4013 | 2.23 | 3500 | 0.2495 | 0.1818 | | 0.449 | 2.54 | 4000 | 0.2152 | 0.1808 | | 0.355 | 2.86 | 4500 | 0.2179 | 0.1670 | | 0.3142 | 3.18 | 5000 | 0.1953 | 0.1542 | | 0.3242 | 3.5 | 5500 | 0.2103 | 0.1526 | | 0.3016 | 3.82 | 6000 | 0.1911 | 0.1477 | | 0.2713 | 4.13 | 6500 | 0.1836 | 0.1422 | | 0.2807 | 4.45 | 7000 | 0.1924 | 0.1447 | | 0.2929 | 4.77 | 7500 | 0.1848 | 0.1402 | | 0.2595 | 5.09 | 8000 | 0.1783 | 0.1330 | | 0.2289 | 5.41 | 8500 | 0.1901 | 0.1313 | | 0.2567 | 5.72 | 9000 | 0.1784 | 0.1298 | | 0.2401 | 6.04 | 9500 | 0.1956 | 0.1298 | | 0.2098 | 6.36 | 10000 | 0.1748 | 0.1277 | | 0.2246 | 6.68 | 10500 | 0.1777 | 0.1254 | | 0.2197 | 7.0 | 11000 | 0.1703 | 0.1222 | | 0.2122 | 7.32 | 11500 | 0.1917 | 0.1221 | | 0.2746 | 7.63 | 12000 | 0.1769 | 0.1215 | | 0.2148 | 7.95 | 12500 | 0.1736 | 0.1193 | | 0.1915 | 8.27 | 13000 | 0.1814 | 0.1161 | | 0.2462 | 8.59 | 13500 | 0.1748 | 0.1166 | | 0.1872 | 8.91 | 14000 | 0.1769 | 0.1133 | | 0.1886 | 9.22 | 14500 | 0.1852 | 0.1143 | | 0.1789 | 9.54 | 15000 | 0.1696 | 0.1126 | | 0.1692 | 9.86 | 15500 | 0.1817 | 0.1122 | | 0.1765 | 10.18 | 16000 | 0.1769 | 0.1093 | | 0.1699 | 10.5 | 16500 | 0.1604 | 0.1084 | | 0.1591 | 10.81 | 17000 | 0.1777 | 0.1080 | | 0.1499 | 11.13 | 17500 | 0.1645 | 0.1074 | | 0.163 | 11.45 | 18000 | 0.1704 | 0.1065 | | 0.1597 | 11.77 | 18500 | 0.1576 | 0.1064 | | 0.1484 | 12.09 | 19000 | 0.1637 | 0.1041 | | 0.1464 | 12.4 | 19500 | 0.1631 | 0.1047 | | 0.156 | 12.72 | 20000 | 0.1686 | 0.1029 | | 0.1625 | 13.04 | 20500 | 0.1648 | 0.1023 | | 0.1395 | 13.36 | 21000 | 0.1688 | 0.1027 | | 0.1387 | 13.68 | 21500 | 0.1670 | 0.1013 | | 0.1434 | 13.99 | 22000 | 0.1677 | 0.1017 | | 0.1442 | 14.31 | 22500 | 0.1688 | 0.1008 | | 0.1439 | 14.63 | 23000 | 0.1647 | 0.1004 | | 0.137 | 14.95 | 23500 | 0.1636 | 0.1006 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
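A minimal, lower-level inference sketch for this checkpoint, assuming the audio has already been loaded as a 16 kHz float array (real audio loading and resampling are left out; silence is used as a stand-in):

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "NbAiLab/XLSR-300M-bokmaal"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Stand-in for real audio: one second of silence at 16 kHz.
waveform = np.zeros(16_000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding (no language model).
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```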
{"language": ["nb-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nb-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "XLSR-300M-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.07699635320946434, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.0284288464829, "name": "Test (Bokm\u00e5l) CER"}]}]}]}
NbAiLab/XLSR-300M-bokmaal
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:NbAiLab/NPSC", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{"language": ["nn-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nn-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "XLSR-300M-nynorsk", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_nynorsk"}, "metrics": [{"type": "wer", "value": 0.12136286840623241, "name": "Test (Nynorsk) WER"}, {"type": "cer", "value": 0.041988362534453025, "name": "Test (Nynorsk) CER"}]}]}]}
NbAiLab/XLSR-300M-nynorsk
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:NbAiLab/NPSC", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NbAiLab/kenlm-5gram-npsc
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
zero-shot-classification
transformers
**Release 1.0** (March 11, 2021)

# NB-Bert base model finetuned on Norwegian machine translated MNLI

## Description

The most effective way of creating a good classifier is to finetune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible. [Yin et al.](https://arxiv.org/abs/1909.00161) proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The method works by reformulating the question as an MNLI hypothesis. If we want to figure out if a text is about "sport", we simply state that "This text is about sport" ("Denne teksten handler om sport").

When the model is finetuned on the 400k-large MNLI task, it is in many cases able to solve such classification tasks. There is no MNLI set of this size in Norwegian, but we have trained the model on a machine-translated version of the original MNLI set.

## Testing the model

For testing the model, we recommend the [NbAiLab Colab Notebook](https://colab.research.google.com/gist/peregilk/769b5150a2f807219ab8f15dd11ea449/nbailab-mnli-norwegian-demo.ipynb)

## Hugging Face zero-shot-classification pipeline

The easiest way to try this out is by using the Hugging Face pipeline. Please note that you will get better results when using a Norwegian hypothesis template instead of the default English one.

```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="NbAiLab/nb-bert-base-mnli")
```

You can then use this pipeline to classify sequences into any of the class names you specify.

```python
sequence_to_classify = 'Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.'
candidate_labels = ['politikk', 'helse', 'sport', 'religion']
hypothesis_template = 'Dette eksempelet er {}.'
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True)
# {'labels': ['helse', 'politikk', 'sport', 'religion'],
#  'scores': [0.4210019111633301, 0.0674605593085289, 0.000840459018945694, 0.0007541406666859984],
#  'sequence': 'Folkehelseinstituttets mest optimistiske anslag er at alle over 18 år er ferdigvaksinert innen midten av september.'}
```

## More information

For more information on the model, see https://github.com/NBAiLab/notram

Here you will also find a Colab notebook explaining in more detail how to use the zero-shot-classification pipeline.
{"language": false, "license": "cc-by-4.0", "tags": ["nb-bert", "zero-shot-classification", "pytorch", "tensorflow", "norwegian", "bert"], "datasets": ["mnli", "multi_nli", "xnli"], "thumbnail": "https://raw.githubusercontent.com/NBAiLab/notram/master/images/nblogo_2.png", "pipeline_tag": "zero-shot-classification", "widget": [{"example_title": "Nyhetsartikkel om FHI", "text": "Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.", "candidate_labels": "helse, politikk, sport, religion"}]}
NbAiLab/nb-bert-base-mnli
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "text-classification", "nb-bert", "zero-shot-classification", "tensorflow", "norwegian", "no", "dataset:mnli", "dataset:multi_nli", "dataset:xnli", "arxiv:1909.00161", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
**Release 1.0** (November 17, 2021)

# nb-bert-base-ner

## Description

NB-Bert base model fine-tuned on the Named Entity Recognition task using the [NorNE dataset](https://huggingface.co/datasets/NbAiLab/norne).

## Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base-ner")
model = AutoModelForTokenClassification.from_pretrained("NbAiLab/nb-bert-base-ner")

nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jeg heter Kjell og bor i Oslo."

ner_results = nlp(example)
print(ner_results)
```
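As suggested by the inference settings in this model's metadata, word pieces can be merged into whole entities with the pipeline's `aggregation_strategy` argument; a small sketch using the same example sentence:

```python
from transformers import pipeline

# "first" matches the aggregation strategy suggested in the model's inference settings.
ner = pipeline("ner", model="NbAiLab/nb-bert-base-ner", aggregation_strategy="first")
for entity in ner("Jeg heter Kjell og bor i Oslo."):
    print(entity["entity_group"], entity["word"])
```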
{"language": false, "license": "cc-by-4.0", "tags": ["norwegian", "bert", "ner"], "datasets": ["norne"], "thumbnail": "nblogo_3.png", "pipeline_tag": "token-classification", "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": [{"text": "Trond Giske har bekreftet p\u00e5 sp\u00f8rsm\u00e5l fra Adresseavisen at Hansen leide et rom i hans leilighet i Trondheim."}]}
NbAiLab/nb-bert-base-ner
null
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "norwegian", "ner", "no", "dataset:norne", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# NB-BERT-base Sámi Relevant

This is a model capable of predicting when a chunk of text could potentially be of interest to the Sámi Bibliographers at the National Library of Norway.
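A minimal sketch of querying the classifier through the 🤗 `text-classification` pipeline; the example text is one of the widget examples for this model, and the label names returned depend on the model's config:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="NbAiLab/nb-bert-base-sami-relevant")

# One of the widget examples: a Northern Sámi sentence about the Riddu Riđđu festival.
text = "Riddu Riđđu Festivála lea jahkásaš musihkka- ja -kulturfestivála mii lágiduvvo Gáivuonas Davvi-Romssas."
print(classifier(text))  # e.g. [{"label": ..., "score": ...}]
```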
{"language": ["se", "no", "en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["sami relevant"], "metrics": ["matthews_correlation"], "pipeline_tag": "text-classification", "widget": [{"text": "Riddu Ri\u0111\u0111u Festiv\u00e1la lea jahk\u00e1sa\u0161 musihkka- ja -kulturfestiv\u00e1la mii l\u00e1giduvvo G\u00e1ivuonas Davvi-Romssas."}, {"text": "The S\u00e1mi languages form a branch of the Uralic language family. According to the traditional view, S\u00e1mi is within the Uralic family most closely related to the Finnic languages (Sammallahti 1998)."}, {"text": "Joseph Robinette Biden Jr. is an American politician who is the 46th and current president of the United States."}]}
NbAiLab/nb-bert-base-sami-relevant
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "sami relevant", "se", "no", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00