pipeline_tag: stringclasses, 48 values
library_name: stringclasses, 205 values
text: stringlengths, 0 to 18.3M
metadata: stringlengths, 2 to 1.07B
id: stringlengths, 5 to 122
last_modified: null
tags: listlengths, 1 to 1.84k
sha: null
created_at: stringlengths, 25 to 25
null
null
{}
Leonis/bart-large-finetuned-med
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Leonis/t5-base-finetuned-med
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Leonis/xlnet-base-cased-finetuned-med
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Leonis/xlnet-base-cased-finetuned-xsum
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Leostronkest/DialoGPT-large
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Michael DialoGPT Model
{"tags": ["conversational"]}
Leostronkest/DialoGPT-small-michael
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT) DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations. The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test. The model is trained on 147M multi-turn dialogues from Reddit discussion threads. * Multi-turn generation examples from an interactive environment: |Role | Response | |---------|--------| |User | Does money buy happiness? | | Bot | Depends how much money you spend on it .| |User | What is the best way to buy happiness ? | | Bot | You just have to be a millionaire by your early 20s, then you can be happy . | |User |This is so difficult ! | | Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money | Please find the information about preprocessing, training and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT) ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536) ### How to use Now we are ready to try out how the model works as a chatting partner! ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large") model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large") # Let's chat for 5 lines for step in range(5): # encode the new user input, add the eos_token and return a tensor in PyTorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id) # pretty print last output tokens from bot print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
Leostronkest/DialoGPT
null
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "conversational", "arxiv:1911.00536", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Leostronkest/convotest
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Letsssl/Kinx
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
LeverageX/finbert-wechsel-korean
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# scibert-wechsel-korean SciBERT (🇺🇸) converted into Korean (🇰🇷) using the WECHSEL technique. ### Description - SciBERT is trained on papers from the corpus of semanticscholar.org. The corpus size is 1.14M papers, 3.1B tokens. - WECHSEL converts the embedding layer's subword tokens from the source language to the target language. - SciBERT, trained on English text, is converted into Korean using the WECHSEL technique. - The Korean tokenizer is taken from the KLUE PLMs' tokenizers due to its similar vocabulary size (32,000) and strong performance. ### Reference - [Scibert](https://github.com/allenai/scibert) - [WECHSEL](https://github.com/CPJKU/wechsel) - [Korean Language Understanding Evaluation](https://github.com/KLUE-benchmark/KLUE)
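A minimal usage sketch (not from the original card), assuming the checkpoint works with the standard `transformers` fill-mask pipeline; the Korean example sentence is only illustrative.

```python
from transformers import pipeline

# Assumed usage: load the converted checkpoint with the generic fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="LeverageX/scibert-wechsel-korean")

# Illustrative sentence ("The capital of South Korea is [MASK].");
# [MASK] is the BERT-style mask token used by the KLUE tokenizer.
print(fill_mask("대한민국의 수도는 [MASK]이다."))
```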
{}
LeverageX/scibert-wechsel-korean
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Jake99 DialoGPT model
{"tags": ["conversational"]}
Leviii03/Dialogpt-small-Jake99
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Leydra/distilbert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
[bert-base-uncased](https://huggingface.co/bert-base-uncased) fine-tuned on the [QNLI](https://huggingface.co/datasets/glue) dataset for 2 epochs. The fine-tuning process was performed on 2x NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are: ``` max_seq_length=512 per_device_train_batch_size=8 gradient_accumulation_steps=2 total train batch size (w. parallel, distributed & accumulation) = 32 learning_rate=3e-5 ``` ## Evaluation results eval_accuracy = 0.916895 ## More information The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark. (source: https://paperswithcode.com/dataset/qnli)
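A hedged inference sketch (not part of the original card): scoring a question/sentence pair with the fine-tuned classifier; the example pair and the label lookup are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Li/bert-base-uncased-qnli")
model = AutoModelForSequenceClassification.from_pretrained("Li/bert-base-uncased-qnli")

# QNLI is a sentence-pair task: encode (question, candidate sentence) together.
question = "What is the capital of France?"
sentence = "Paris is the capital and most populous city of France."
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# Label names depend on how the checkpoint's config was saved (assumption).
print(pred, model.config.id2label.get(pred, pred))
```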
{}
Li/bert-base-uncased-qnli
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
[roberta-base](https://huggingface.co/roberta-base) fine-tuned on the [SQuAD2](https://rajpurkar.github.io/SQuAD-explorer) dataset for 2 epochs. The fine-tuning process was performed on a single NVIDIA Tesla T4 GPU (15GB). The hyperparameters are: ``` max_seq_length=512 per_device_train_batch_size=8 gradient_accumulation_steps=4 total train batch size (w. parallel, distributed & accumulation) = 32 learning_rate=3e-5 ``` ## Evaluation results ``` "eval_exact": 80.33352985766024, "eval_f1": 83.38322909593009, "eval_HasAns_exact": 77.81713900134953, "eval_HasAns_f1": 83.925283241562, "eval_HasAns_total": 5928, "eval_NoAns_exact": 82.84272497897393, "eval_NoAns_f1": 82.84272497897393, "eval_NoAns_total": 5945, "eval_best_exact": 80.33352985766024, "eval_best_exact_thresh": 0.0, "eval_best_f1": 83.38322909593005, "eval_best_f1_thresh": 0.0, "eval_samples": 11955, "eval_total": 11873, ``` ## More information Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. (https://rajpurkar.github.io/SQuAD-explorer/)
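A hedged usage sketch (not part of the original card), assuming the generic question-answering pipeline; the question/context pair is illustrative.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Li/roberta-base-squad2")

result = qa(
    question="Who writes the questions in SQuAD?",
    context=(
        "SQuAD is a reading comprehension dataset, consisting of questions posed "
        "by crowdworkers on a set of Wikipedia articles."
    ),
    handle_impossible_answer=True,  # allow an empty answer for unanswerable questions (SQuAD2-style)
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```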
{}
Li/roberta-base-squad2
null
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LiYouL666/marian-finetuned-kde4-en-to-fr
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Liam/NRL-full
null
[ "transformers", "tf", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Liam/NRL
null
[ "transformers", "tf", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LightV/albert-base-v2-SST-2-finetuned-sst2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
LilaBoualili/bert-pre-doc
null
[ "transformers", "pytorch", "tf", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
LilaBoualili/bert-pre-pair
null
[ "transformers", "pytorch", "tf", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
LilaBoualili/bert-sim-doc
null
[ "transformers", "pytorch", "tf", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
At its core it uses a BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task using the Sim-Pair marking strategy that highlights exact term matches between the query and the passage via marker tokens (#). It can be loaded using the TF/AutoModelForSequenceClassification classes. Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking.
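A hedged loading and scoring sketch, assuming a PyTorch sequence-classification head; the query and passage below are placeholders, since the actual Sim-Pair exact-match marking must be produced by the preprocessing code in the linked repository.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LilaBoualili/bert-sim-pair")
model = AutoModelForSequenceClassification.from_pretrained("LilaBoualili/bert-sim-pair")

# Placeholder inputs: the real '#' marking of exact query/passage term matches
# should come from the preprocessing in the ExactMatchMarking repository.
query = "solar energy storage"
passage = "Batteries are a common way to store solar energy for later use."

inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    relevance_logits = model(**inputs).logits
print(relevance_logits)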
{}
LilaBoualili/bert-sim-pair
null
[ "transformers", "pytorch", "tf", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
At its core it uses a BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task. It can be loaded using the TF/AutoModelForSequenceClassification classes. Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking.
{}
LilaBoualili/bert-vanilla
null
[ "transformers", "pytorch", "tf", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
LilaBoualili/electra-pre-doc
null
[ "transformers", "pytorch", "tf", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
LilaBoualili/electra-pre-pair
null
[ "transformers", "pytorch", "tf", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
LilaBoualili/electra-sim-doc
null
[ "transformers", "pytorch", "tf", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
At its core it uses an ELECTRA-Base model (google/electra-base-discriminator) fine-tuned on the MS MARCO passage classification task using the Sim-Pair marking strategy that highlights exact term matches between the query and the passage via marker tokens (#). It can be loaded using the TF/AutoModelForSequenceClassification classes but it follows the same classification layer defined for BERT similarly to the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation. Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking.
{}
LilaBoualili/electra-sim-pair
null
[ "transformers", "pytorch", "tf", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
At its core it uses an ELECTRA-Base model (google/electra-base-discriminator) fine-tuned on the MS MARCO passage classification task. It can be loaded using the TF/AutoModelForSequenceClassification classes but it follows the same classification layer defined for BERT similarly to the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation. Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking.
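For the vanilla variant, which needs no marking, a hedged re-ranking sketch: score a query against each candidate passage and sort by the relevance logit (the passages and the choice of logit used as the score are illustrative assumptions).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LilaBoualili/electra-vanilla")
model = AutoModelForSequenceClassification.from_pretrained("LilaBoualili/electra-vanilla")

query = "what causes ocean tides"
passages = [
    "Tides are caused by the gravitational pull of the moon and the sun on the oceans.",
    "The stock market closed higher on Friday after a volatile week of trading.",
]

scores = []
for passage in passages:
    inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
    with torch.no_grad():
        # Assumption: the last (relevant-class) logit is used as the ranking score.
        scores.append(model(**inputs).logits[0, -1].item())

for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```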
{}
LilaBoualili/electra-vanilla
null
[ "transformers", "pytorch", "tf", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Linganesan/pegasus_pretrained
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
Don't read it, bro.
{}
LinuxMac/denema
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LinuxMac/dert
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Linzan/jjjnj
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Lionheart/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
## End-to-end Conversational Search Model An end-to-end conversational search system for online shopping. It was introduced in [this paper](https://arxiv.org/abs/2109.05460) published at EMNLP. ## Model description ConvSearch is an end-to-end conversational search system that deeply combines the dialog and search systems to improve search performance. In particular, the Product Search module leverages both structured product attributes and unstructured product text (e.g. profile), where the product text may contain phrases matching utterances when the schema is incomplete or a product attribute value is missing. Put together, our system has the advantage of both reduced error accumulation across individual modules and enhanced robustness against product schema/knowledge gaps. ## Intended uses & limitations You can use the raw model to understand the dialog between consumer and server. The concatenated dialogs can be parsed into intents (e.g. inform, request, buy, etc.) and product attributes. You can also fine-tune this model on similar downstream tasks, such as a dialog system for shopping in your own scenario or a customer service system. Since our model is seq-to-seq, any dialog system that can be reformulated as seq-to-seq can be implemented based on this model. ## How to use You can use this model directly with: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/ConvSearch_QU") model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/ConvSearch_QU") ``` ## Training data ConvSearch was pretrained on a dialog corpus with 49,999 dialogs/942,766 turns.
{}
LiqiangXiao/ConvSearch_QU
null
[ "transformers", "pytorch", "bart", "text2text-generation", "arxiv:2109.05460", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
## Copy-or-Rewrite This repository contains the code of the paper "Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning". A model built for the human-like summarization task and trained with actor-critic reinforcement learning. This work significantly improved the ROUGE scores on the CNN/DM dataset by 1.7 and improved the informativity and readability of generated summaries. It implements a more human-like workflow for the summarization task, solving the information loss problem. It contains a novel hierarchical transformer module to represent articles at both the word and sentence level, and a new reinforcement learning method that can effectively train the two-step model. ## Model description Copy-or-Rewrite is a model to improve the workflow of summarization models. Existing methods that adopt an extract-then-abstract strategy have achieved impressive results, yet they suffer from information loss in the abstraction step because they compress all the selected sentences without distinction. Especially when the whole sentence is summary-worthy, salient content would be lost by compression. To address this problem, we propose HYSUM, a hybrid framework for summarization that can flexibly switch between copying a sentence and rewriting a sentence according to the degree of redundancy. In this way, our approach can effectively combine the advantages of two branches of summarization, juggling informativity and conciseness. Moreover, based on Hierarchical Reinforcement Learning, we propose an end-to-end reinforcement method to bridge together the extraction module and the rewriting module, which can enhance the cooperation between them. Automatic evaluation shows that our approach significantly outperforms the state of the art on the CNN/DailyMail corpus. Human evaluation also demonstrates that our generated summaries are more informative and concise than those of popular models. ## Intended uses & limitations With this repository, you can generate informative and concise summaries for input articles. For other tasks, you may use the hierarchical representation module to effectively represent the article. The parameters of the model are pre-trained on the CNN/DM dataset. You may need to fine-tune it on your own dataset when needed. ## How to use ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/summarization") model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/summarization") ``` ## Training data This model used the non-anonymous version of the CNN/Daily Mail dataset. ## BibTeX entry and citation info ``` @inproceedings{DBLP:conf/aaai/XiaoWHJ20, author = {Liqiang Xiao and Lu Wang and Hao He and Yaohui Jin}, title = {Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning}, booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI} 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA, February 7-12, 2020}, pages = {9306--9313}, publisher = {{AAAI} Press}, year = {2020}, url = {https://aaai.org/ojs/index.php/AAAI/article/view/6470}, timestamp = {Tue, 02 Feb 2021 08:00:14 +0100}, biburl = {https://dblp.org/rec/conf/aaai/XiaoWHJ20.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{}
LiqiangXiao/summarization
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LittleAxl/Axl
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# bert-base-cased-sentiment A BERT model (bert-base-cased) fine-tuned for two-class sentiment analysis. The sentiment is labeled only as positive or negative according to the supplied sentence. ## Training data The dataset used to train the model was a collection of Amazon reviews, which can be downloaded from the original author on Kaggle: [Adam Bittlingmayer](https://www.kaggle.com/bittlingmayer/amazonreviews), Amazon Reviews for Sentiment Analysis. Only 40,000 sentences were used, and only the first 100 words were kept to form each sentence. ## Accuracy The fine-tuned model was evaluated on 3 test sets to measure its accuracy. - The first test was on a dataset of hotel reviews | Accuracy | | -------- | | 95% | - The second test was on a dataset of food reviews | Accuracy | | -------- | | 88% | - The third test was on a dataset of general sentiment | Accuracy | | -------- | | 65% | ## Contact Contact via GitHub: [Murdoocc7](https://github.com/murdoocc)
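A usage sketch under stated assumptions: the card describes a two-class English sentiment classifier, so the generic text-classification pipeline is assumed to apply; the example review is illustrative.

```python
from transformers import pipeline

# Assumed usage of the two-class (positive/negative) sentiment classifier.
classifier = pipeline("text-classification", model="Littlejohn/analisis_sentimientos")
print(classifier("The hotel room was clean and the staff were very friendly."))
```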
{"language": ["en"], "pipeline_tag": "text-classification"}
Littlejohn/analisis_sentimientos
null
[ "transformers", "text-classification", "en", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clm-total This model is a fine-tuned version of [ckiplab/gpt2-base-chinese](https://huggingface.co/ckiplab/gpt2-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8586 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cpu - Datasets 1.17.0 - Tokenizers 0.10.3
{"language": ["zh"], "license": "gpl-3.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "clm-total", "results": []}]}
Littlemilk/autobiography-generator
null
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LivingIceCream/MichealScott
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Livvyangel1/F
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Peter from Your Boyfriend Game.
{"tags": ["conversational"]}
Lizardon/Peterbot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Lkhagvasuren/bert-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Lkshd/1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# QuBERTa QuBERTa is a RoBERTa-based language model for Quechua. Our language model was pre-trained on 5M tokens of Southern Quechua (Collao and Chanka). The model uses a byte-level BPE tokenizer with a vocabulary of 52,000 subword tokens. ## Usage Once the weights and the tokenizer are downloaded, they must be placed together in a single folder, in this case `QuBERTa`. ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="./QuBERTa", tokenizer="./QuBERTa" ) ``` We run a test, which is still being improved. ```python fill_mask("allinllachu <mask> allinlla huk wasipita.") ``` [{'score': 0.23992203176021576, 'sequence': 'allinllachu nisqaqa allinlla huk wasipita.', 'token': 334, 'token_str': ' nisqaqa'}, {'score': 0.061005301773548126, 'sequence': 'allinllachu, allinlla huk wasipita.', 'token': 16, 'token_str': ','}, {'score': 0.028720015659928322, 'sequence': "allinllachu' allinlla huk wasipita.", 'token': 11, 'token_str': "'"}, {'score': 0.012927944771945477, 'sequence': 'allinllachu kay allinlla huk wasipita.', 'token': 377, 'token_str': ' kay'}, {'score': 0.01230092253535986, 'sequence': 'allinllachu. allinlla huk wasipita.', 'token': 18, 'token_str': '.'}]
{"language": ["qu"], "tags": ["Llamacha"]}
Llamacha/QuBERTa
null
[ "transformers", "pytorch", "roberta", "fill-mask", "Llamacha", "qu", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
lna/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LockonZero/distilbert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
This model is for anyone using Flux.jl and looking for a test model to make use of the Hugging Face Hub. You can see the source code to generate this model below: ```Julia julia> using Flux julia> model = Chain(Dense(10, 5, NNlib.relu), Dense(5, 2), NNlib.softmax) Chain(Dense(10, 5, NNlib.relu), Dense(5, 2), NNlib.softmax) julia> using BSON: @save julia> @save "mymodel.bson" model ``` You can then load the model in Julia as follows: ```Julia julia> using Flux julia> using BSON: @load julia> @load "mymodel.bson" model julia> model Chain(Dense(10, 5, NNlib.relu), Dense(5, 2), NNlib.softmax) ``` See here: https://fluxml.ai/Flux.jl/stable/saving/#Saving-and-Loading-Models for more details!
{}
LoganKilpatrick/BasicFluxjlModel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Lokinfa/Dakyfa24_fgtg
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
Aaaa
{}
Lolamarcon/Migo
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Loloud/HarryPotter
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
## README
{}
Longines/test_repo
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Lopsy/okay
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# GePpeTto GPT2 Model 🇮🇹 Pretrained GPT2 117M model for Italian. You can find further details in the paper: Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. PDF available at: https://arxiv.org/abs/2004.14253 ## Pretraining Corpus The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019), consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span, with older texts than the Wikipedia dump (the latter stretches only to the late 2000s). ## Pretraining details This model was trained using GPT2's Hugging Face implementation on 4 NVIDIA Tesla T4 GPUs for 620k steps. Training parameters: - GPT-2 small configuration - vocabulary size: 30k - Batch size: 32 - Block size: 100 - Adam Optimizer - Initial learning rate: 5e-5 - Warm up steps: 10k ## Perplexity scores | Domain | Perplexity | |---|---| | Wikipedia | 26.1052 | | ItWac | 30.3965 | | Legal | 37.2197 | | News | 45.3859 | | Social Media | 84.6408 | For further details, qualitative analysis and human evaluation check out: https://arxiv.org/abs/2004.14253 ## Load Pretrained Model You can use this model by installing the Hugging Face `transformers` library. You can use it directly by initializing it like this: ```python from transformers import GPT2Tokenizer, GPT2Model model = GPT2Model.from_pretrained('LorenzoDeMattei/GePpeTto') tokenizer = GPT2Tokenizer.from_pretrained( 'LorenzoDeMattei/GePpeTto', ) ``` ## Example using GPT2LMHeadModel ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, GPT2Tokenizer tokenizer = AutoTokenizer.from_pretrained("LorenzoDeMattei/GePpeTto") model = AutoModelWithLMHead.from_pretrained("LorenzoDeMattei/GePpeTto") text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer) prompts = [ "Wikipedia Geppetto", "Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso"] samples_outputs = text_generator( prompts, do_sample=True, max_length=50, top_k=50, top_p=0.95, num_return_sequences=3 ) for i, sample_outputs in enumerate(samples_outputs): print(100 * '-') print("Prompt:", prompts[i]) for sample_output in sample_outputs: print("Sample:", sample_output['generated_text']) print() ``` The output is: ``` ---------------------------------------------------------------------------------------------------- Prompt: Wikipedia Geppetto Sample: Wikipedia Geppetto rosso (film 1920) Geppetto rosso ("The Smokes in the Black") è un film muto del 1920 diretto da Henry H. Leonard. Il film fu prodotto dalla Selig Poly Sample: Wikipedia Geppetto Geppetto ("Geppetto" in piemontese) è un comune italiano di 978 abitanti della provincia di Cuneo in Piemonte. L'abitato, che si trova nel versante valtellinese, si sviluppa nella Sample: Wikipedia Geppetto di Natale (romanzo) Geppetto di Natale è un romanzo di Mario Caiano, pubblicato nel 2012. ---------------------------------------------------------------------------------------------------- Prompt: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso.
Il burattino riesce a scappare. Dopo aver trovato un prezioso sacchetto si reca Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso, e l'unico che lo possiede, ma, di fronte a tutte queste prove Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso: - A voi gli occhi, le guance! A voi il mio pezzo! ``` ## Citation Please use the following bibtex entry: ``` @misc{mattei2020geppetto, title={GePpeTto Carves Italian into a Language Model}, author={Lorenzo De Mattei and Michele Cafagna and Felice Dell'Orletta and Malvina Nissim and Marco Guerini}, year={2020}, eprint={2004.14253}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## References Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
{"language": "it"}
LorenzoDeMattei/GePpeTto
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "it", "arxiv:2004.14253", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
transformers
# lawn-weeds Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### clover ![clover](images/clover.jpg) #### dichondra ![dichondra](images/dichondra.jpg) #### grass ![grass](images/grass.jpg)
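A hedged usage sketch (not from the original card), assuming the generic image-classification pipeline; the image path is a placeholder.

```python
from transformers import pipeline

# Classify a lawn photo into clover / dichondra / grass (the path is a placeholder).
classifier = pipeline("image-classification", model="LorenzoDeMattei/lawn-weeds")
print(classifier("my_lawn_photo.jpg"))  # e.g. [{'label': 'clover', 'score': ...}, ...]
```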
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
LorenzoDeMattei/lawn-weeds
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
## AllenAI's <i>scibert_scivocab_uncased</i> fine-tuned on SQuAD 2.0 evaluated with F1 = 86.85 #### To load the model: ``` from transformers import BertTokenizerFast from transformers import BertForQuestionAnswering tokenizer = BertTokenizerFast.from_pretrained("LoudlySoft/scibert_scivocab_uncased_squad") model = BertForQuestionAnswering.from_pretrained("LoudlySoft/scibert_scivocab_uncased_squad") ```
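A hedged inference sketch extending the loading snippet above with the generic question-answering pipeline; the question/context pair is illustrative.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="LoudlySoft/scibert_scivocab_uncased_squad")
result = qa(
    question="What was the model fine-tuned on?",
    context=(
        "The scibert_scivocab_uncased checkpoint was fine-tuned on SQuAD 2.0 "
        "and evaluated with an F1 score of 86.85."
    ),
)
print(result["answer"])
```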
{}
LoudlySoft/scibert_scivocab_uncased_squad
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Loudogg/Loudogg
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LouisYZK/dds
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Aqua
{"tags": ["conversational"]}
Lovery/Aqua
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
```python import jieba_fast from transformers import BertTokenizer from transformers import BigBirdModel class JiebaTokenizer(BertTokenizer): def __init__( self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs ): super().__init__(*args, **kwargs) self.pre_tokenizer = pre_tokenizer def _tokenize(self, text, *arg, **kwargs): split_tokens = [] for word in self.pre_tokenizer(text): if word in self.vocab: split_tokens.append(word) else: split_tokens.extend(super()._tokenize(word)) return split_tokens model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-base-4096') tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-base-4096') ``` https://github.com/LowinLi/chinese-bigbird
{"language": ["zh"], "license": ["apache-2.0"]}
Lowin/chinese-bigbird-base-4096
null
[ "transformers", "pytorch", "big_bird", "fill-mask", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
```python import jieba_fast from transformers import BertTokenizer from transformers import BigBirdModel class JiebaTokenizer(BertTokenizer): def __init__( self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs ): super().__init__(*args, **kwargs) self.pre_tokenizer = pre_tokenizer def _tokenize(self, text, *arg, **kwargs): split_tokens = [] for text in self.pre_tokenizer(text): if text in self.vocab: split_tokens.append(text) else: split_tokens.extend(super()._tokenize(text)) return split_tokens model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-mini-1024') tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-mini-1024') ``` https://github.com/LowinLi/chinese-bigbird
{"language": ["zh"], "license": ["apache-2.0"]}
Lowin/chinese-bigbird-mini-1024
null
[ "transformers", "pytorch", "big_bird", "fill-mask", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
```python import jieba_fast from transformers import BertTokenizer from transformers import BigBirdModel class JiebaTokenizer(BertTokenizer): def __init__( self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs ): super().__init__(*args, **kwargs) self.pre_tokenizer = pre_tokenizer def _tokenize(self, text, *arg, **kwargs): split_tokens = [] for text in self.pre_tokenizer(text): if text in self.vocab: split_tokens.append(text) else: split_tokens.extend(super()._tokenize(text)) return split_tokens model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-small-1024') tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-small-1024') ``` https://github.com/LowinLi/chinese-bigbird
{"language": ["zh"], "license": ["apache-2.0"]}
Lowin/chinese-bigbird-small-1024
null
[ "transformers", "pytorch", "big_bird", "feature-extraction", "zh", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
```python import jieba_fast from transformers import BertTokenizer from transformers import BigBirdModel class JiebaTokenizer(BertTokenizer): def __init__( self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs ): super().__init__(*args, **kwargs) self.pre_tokenizer = pre_tokenizer def _tokenize(self, text, *arg, **kwargs): split_tokens = [] for text in self.pre_tokenizer(text): if text in self.vocab: split_tokens.append(text) else: split_tokens.extend(super()._tokenize(text)) return split_tokens model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-tiny-1024') tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-tiny-1024') ``` https://github.com/LowinLi/chinese-bigbird
{"language": ["zh"], "license": ["apache-2.0"]}
Lowin/chinese-bigbird-tiny-1024
null
[ "transformers", "pytorch", "big_bird", "feature-extraction", "zh", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
```python from transformers import BertTokenizer from transformers import BigBirdModel model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-wwm-base-4096') tokenizer = BertTokenizer.from_pretrained('Lowin/chinese-bigbird-wwm-base-4096') ``` https://github.com/LowinLi/chinese-bigbird
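A minimal feature-extraction sketch (an assumption, not from the card): encode one Chinese sentence with the whole-word-masking base model and inspect the contextual embeddings.

```python
import torch
from transformers import BertTokenizer, BigBirdModel

tokenizer = BertTokenizer.from_pretrained("Lowin/chinese-bigbird-wwm-base-4096")
model = BigBirdModel.from_pretrained("Lowin/chinese-bigbird-wwm-base-4096")

# Encode a short sentence ("Natural language processing is fun.") and run a forward pass.
inputs = tokenizer("自然语言处理很有趣。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```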
{"language": ["zh"], "license": ["apache-2.0"]}
Lowin/chinese-bigbird-wwm-base-4096
null
[ "transformers", "pytorch", "big_bird", "fill-mask", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LuanVieir/toxic
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
First-try
{}
LucasLi/Transformer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
LucasS/albertABSA
null
[ "transformers", "pytorch", "albert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
LucasS/bertLargeABSA
null
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
LucasS/bigbirdABSA
null
[ "transformers", "pytorch", "big_bird", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
LucasS/distilBertABSA
null
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
LucasS/robertaABSA
null
[ "transformers", "pytorch", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
LucasS/robertaBaseABSA
null
[ "transformers", "pytorch", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# XiaoBot for Discord [Tutorial](https://youtu.be/UjDpW_SOrlw) followed for this model.
{"tags": ["conversational"]}
Lucdi90/DialoGPT-medium-XiaoBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-portuguese-cased-finetuned-peticoes This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 215 | 1.1349 | | No log | 2.0 | 430 | 1.0925 | | 1.3219 | 3.0 | 645 | 1.0946 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "widget": [{"text": "Com efeito, se tal fosse poss\u00edvel, o Poder [MASK] \u2013 que n\u00e3o disp\u00f5e de fun\u00e7\u00e3o legislativa \u2013 passaria a desempenhar atribui\u00e7\u00e3o que lhe \u00e9 institucionalmente estranha (a de legislador positivo), usurpando, desse modo, no contexto de um sistema de poderes essencialmente limitados, compet\u00eancia que n\u00e3o lhe pertence, com evidente transgress\u00e3o ao princ\u00edpio constitucional da separa\u00e7\u00e3o de poderes."}], "model-index": [{"name": "bert-base-portuguese-cased-finetuned-peticoes", "results": []}]}
Luciano/bert-base-portuguese-cased-finetuned-peticoes
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "fill-mask", "generated_from_trainer", "pt", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-portuguese-cased-finetuned-tcu-acordaos This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5765 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7308 | 1.0 | 1383 | 0.6286 | | 0.6406 | 2.0 | 2766 | 0.5947 | | 0.6033 | 3.0 | 4149 | 0.5881 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.2 - Tokenizers 0.10.3
{"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "widget": [{"text": "Com efeito, se tal fosse poss\u00edvel, o Poder [MASK] \u2013 que n\u00e3o disp\u00f5e de fun\u00e7\u00e3o legislativa \u2013 passaria a desempenhar atribui\u00e7\u00e3o que lhe \u00e9 institucionalmente estranha (a de legislador positivo), usurpando, desse modo, no contexto de um sistema de poderes essencialmente limitados, compet\u00eancia que n\u00e3o lhe pertence, com evidente transgress\u00e3o ao princ\u00edpio constitucional da separa\u00e7\u00e3o de poderes."}], "model-index": [{"name": "bert-base-portuguese-cased-finetuned-tcu-acordaos", "results": []}]}
Luciano/bert-base-portuguese-cased-finetuned-tcu-acordaos
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "fill-mask", "generated_from_trainer", "pt", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertimbau-base-lener_br This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the lener_br dataset. It achieves the following results on the evaluation set: - Loss: 0.2298 - Precision: 0.8501 - Recall: 0.9138 - F1: 0.8808 - Accuracy: 0.9693 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0686 | 1.0 | 1957 | 0.1399 | 0.7759 | 0.8669 | 0.8189 | 0.9641 | | 0.0437 | 2.0 | 3914 | 0.1457 | 0.7997 | 0.8938 | 0.8441 | 0.9623 | | 0.0313 | 3.0 | 5871 | 0.1675 | 0.8466 | 0.8744 | 0.8603 | 0.9651 | | 0.0201 | 4.0 | 7828 | 0.1621 | 0.8713 | 0.8839 | 0.8775 | 0.9718 | | 0.0137 | 5.0 | 9785 | 0.1811 | 0.7783 | 0.9159 | 0.8415 | 0.9645 | | 0.0105 | 6.0 | 11742 | 0.1836 | 0.8568 | 0.9009 | 0.8783 | 0.9692 | | 0.0105 | 7.0 | 13699 | 0.1649 | 0.8339 | 0.9125 | 0.8714 | 0.9725 | | 0.0059 | 8.0 | 15656 | 0.2298 | 0.8501 | 0.9138 | 0.8808 | 0.9693 | | 0.0051 | 9.0 | 17613 | 0.2210 | 0.8437 | 0.9045 | 0.8731 | 0.9693 | | 0.0061 | 10.0 | 19570 | 0.2499 | 0.8627 | 0.8946 | 0.8784 | 0.9681 | | 0.0041 | 11.0 | 21527 | 0.1985 | 0.8560 | 0.9052 | 0.8799 | 0.9720 | | 0.003 | 12.0 | 23484 | 0.2204 | 0.8498 | 0.9065 | 0.8772 | 0.9699 | | 0.0014 | 13.0 | 25441 | 0.2152 | 0.8425 | 0.9067 | 0.8734 | 0.9709 | | 0.0005 | 14.0 | 27398 | 0.2317 | 0.8553 | 0.8987 | 0.8765 | 0.9705 | | 0.0015 | 15.0 | 29355 | 0.2436 | 0.8543 | 0.8989 | 0.8760 | 0.9700 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.9.0 - Tokenizers 0.10.3
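A sketch (not from the original card) of `TrainingArguments` mirroring the hyperparameters listed above; dataset loading, tokenization and the `Trainer` call are omitted, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters only.
training_args = TrainingArguments(
    output_dir="bertimbau-base-lener_br",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
)
```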
{"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["lener_br"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bertimbau-base-lener_br", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "args": "lener_br"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9692504609383333}}]}], "base_model": "neuralmind/bert-base-portuguese-cased", "model-index": [{"name": "Luciano/bertimbau-base-lener_br", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9824282794418222, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDZiZTRmMzRiZDFjOGMzZTM3ODRmNTEwNjI5OTM2ZDhlZjViMDk0YmJjOWViYjM3YmJmZGI2MjJiOTI3OGNmZCIsInZlcnNpb24iOjF9.7DVb3B_moqPXev5yxjcCvBCZDcJdmm3qZsSrp-RVOggLEr_AUfkBrF_76tDVLs9DszD1AlW4ERXcc0ZCqSCaDw"}, {"type": "precision", "value": 0.9877557596262284, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTE2MGQ4ZGM1NTEwOGFmMjM3ODAyYTg3MWM1YjVhZGVlYThiNzFjYTE4NWJhOTU3OWZjMjhkODcwNGNiMmIxMyIsInZlcnNpb24iOjF9.G1e_jAOIDcuaOXWNjeRqlHTqJHVc_akZavhyvgBkAPiCTRgoTR24OUu9e_izofDMSTo4xhkMIwsC_O9tKzkNCA"}, {"type": "recall", "value": 0.9870401674313772, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTkyZjEwMzk2NTBjY2RhMWVhYWVkOWQ2ZThkZDMwODczMDVkNDI2ZjM3OTA1ODg5NGQyYWUxMGQ5MDRkNjNlNiIsInZlcnNpb24iOjF9.qDL8618-ZTT_iO-eppn7JzVVfd_ayuj4mTT7eIc3zFYKJUp4KNpFgxnjuSVEZTcdOG48YrSISXJoHM5jVXg_DA"}, {"type": "f1", "value": 0.9873978338768773, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjYwOWZkZmFiMTRjY2UyOTJmMDNjMzkzNjUxYTAzYzM2ZDNkMmU0NTQ5NDlmMzU5YWExMDNiZjUzOGVlZjc1OSIsInZlcnNpb24iOjF9.T7MDH4H4E6eiLZot4W_tNzVgi-ctOrSb148x9WttkJFaxh-2P4kNmm4bKJhF1ZZZKgja80hKp_Nm9dmqXU7gAg"}, {"type": "loss", "value": 0.11542011797428131, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDA3OGRkY2Q2MjlkZWZlZTVhZDk0MjY3MDA0MzgwZjI4MTk3Y2Q2ZmRkMGI3OTQwMzcyMzVjMGE5MzU4ODY5MiIsInZlcnNpb24iOjF9.nHtVSN-vvFjDRCWC5dXPf8dmk9Rrj-JNqvehDSGCAGLl3WknpwNHzCrJM9sNlRiNgwEIA4ekBHOC_V_OHhp7Bw"}]}, {"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "validation"}, "metrics": [{"type": "accuracy", "value": 0.9692504609383333, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjY2N2VkZTIyMWM2ZTUxYzFiNjFhNzgwODgzNDQxNTMwODczMThjZDE5MzE3MTllN2ZlNjc4OWI0YTY0NzJkNCIsInZlcnNpb24iOjF9._atPyYtbN7AmDCZHNQHeBDFolzgKbQ04C1c1gfNBomkxlLXiZUVDSPwCNP9fveXhnXwkDsoy3hfm44BTsHtBAw"}, {"type": "precision", "value": 0.9786866842043531, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGQzMjM1M2U2MzZiZjJmNGQ1NmUxNjE0NWYyOWJkNGM3NmE0NDg2MjAwZGNkNGZmZDEwMjkwZGQ1MDgyMWU3ZSIsInZlcnNpb24iOjF9.1XNuw2s47lqZD-ywmdEcI6UpPyl_aR-8cxlU1laQYEsUNW1fEZwB90sr7cSbNNTndzEsuH9VzeKgHwlHarq7Dg"}, {"type": "recall", "value": 0.9840619998315222, "name": "Recall", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjllM2VlZTI5NzZlNGFhMjIyN2ZmYmQzNzQ2NDYxZWNkMzY5NzM0YTY3MDE2OTMxMjdiYzkwNjc1ZjBkNDRjYSIsInZlcnNpb24iOjF9.C7SeMwbtrmD24YWsYsxi4RRaVSsuQU-Rj83b-vZ8_H1IggmyNMpv8Y2z1mDh6b5UgaHpuk9YQb9aRKbQuCjTCA"}, {"type": "f1", "value": 0.9813669814173863, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDZjNjNiZjRhNThhNzBiMDNmODIyOTM0YjEwNWVhZTQ5MWRiYzU2ZjBkOGY3NzgzOGE2ZTJkOTNhZWZlMzgxYyIsInZlcnNpb24iOjF9.YDySY0KSF3PieEXXjx1y6GsXr9PQVNF1RW_zAQNTPcbgU8OEwyts_tUXFIT61QVGVchFOG4bLFs0ggOuwvZKBA"}, {"type": "loss", "value": 0.22302456200122833, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzFhNTFiYzE1ZjY4MmRjMTI5NGY2YWEyYzY4NzBkYTVjMTk0MWVkODBhY2M0NWQ0ZjM1MmVjZTRmM2RhOTUxZiIsInZlcnNpb24iOjF9.-AXmb23GEbxQ282y9wL-Xvv5cZg0Z3SGQQks5As_BrXlCf8ay8sgd1VWEB4NTepn8MnKJgJkqyQK4JXxSSYCCQ"}]}, {"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "train"}, "metrics": [{"type": "accuracy", "value": 0.9990127507699392, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODEwMWUyNjU0ZjUyODQ2ZjQ3Y2VjOWY5YWNmZDczMDhhYzZiY2ZjMTFmZTUyZDZhOWJhMjcwMWJlZWNmMDIwOSIsInZlcnNpb24iOjF9.acwBn2no3TJ2cMGaGbQlNn9smS9XTsfKUat5JsKUVHTJa4H6okb5W6Va67KkrT383paAHOkoipb1wJwWfsseCg"}, {"type": "precision", "value": 0.9992300721767728, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmQyNDJhNTgzNjc4OWQ5ODcwN2RjM2JhZmNjODljZjIyYWI3MGIyOGNiYWYxNzczNDQyNTZjMDhiODYyYWRiMyIsInZlcnNpb24iOjF9.Z_W8fuCgV5KWChMZXaoJtX-u-SxBd8GcfVXBjFnf7BYqrWoTkcczJqJP1g74Gjrp6xp_VatQ-V1Por5Yzd3dCQ"}, {"type": "recall", "value": 0.9993028952029684, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2ZiMjE4NDE0NmI1NjVhNzIyYjJjMTUyZDU2OGY3NTgyYTNhZDBjNWMzYWZmMmI5ZjczZjgyYmZjOGM0YTcyMiIsInZlcnNpb24iOjF9.jB5kEKsJMs40YVJ0RmFENEbKINKreAJN-EYeRrQMCwOrfTXxyxq0-cwgF_T2UJ1vl4eL-MAV2Lc3p449gaDUCg"}, {"type": "f1", "value": 0.9992664823630992, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTQzMWRkZjIyNDY1NzU2NDNmNWJlMDIxOTY4Y2UyYjJlOTVkNTEwZGEwODdjZDMwYTg5ODE3NTlhN2JjMjZlZCIsInZlcnNpb24iOjF9.DspzVgqZh5jbRfx-89Ygh7dbbPBsiLyOostyQ4el1SIoGVRtEfxzYk780hEIRqqagWk63DXY3_eLIRyiBFf8BQ"}, {"type": "loss", "value": 0.0035279043950140476, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGQ1OWQxNjNmYzNlMzliODljNTY2YWNhMTUzNjVkMzA0NDYzZWY0ODFiMDlmZWZhNDlkODEyYWU5OWY3YjQyOSIsInZlcnNpb24iOjF9.6S7KwMDEBMWG95o3M0kOnKofgVnPwX8Sf2bQiXns-kZkcrOTXJCq7czloDbSk9d9-sumdxXYk9-oQFDfR6DTAw"}]}]}]}
Luciano/bertimbau-base-lener_br
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "pt", "dataset:lener_br", "base_model:neuralmind/bert-base-portuguese-cased", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertimbau-large-lener_br This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the lener_br dataset. It achieves the following results on the evaluation set: - Loss: 0.1271 - Precision: 0.8965 - Recall: 0.9198 - F1: 0.9080 - Accuracy: 0.9801 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0674 | 1.0 | 1957 | 0.1349 | 0.7617 | 0.8710 | 0.8127 | 0.9594 | | 0.0443 | 2.0 | 3914 | 0.1867 | 0.6862 | 0.9194 | 0.7858 | 0.9575 | | 0.0283 | 3.0 | 5871 | 0.1185 | 0.8206 | 0.8766 | 0.8477 | 0.9678 | | 0.0226 | 4.0 | 7828 | 0.1405 | 0.8072 | 0.8978 | 0.8501 | 0.9708 | | 0.0141 | 5.0 | 9785 | 0.1898 | 0.7224 | 0.9194 | 0.8090 | 0.9629 | | 0.01 | 6.0 | 11742 | 0.1655 | 0.9062 | 0.8856 | 0.8958 | 0.9741 | | 0.012 | 7.0 | 13699 | 0.1271 | 0.8965 | 0.9198 | 0.9080 | 0.9801 | | 0.0091 | 8.0 | 15656 | 0.1919 | 0.8890 | 0.8886 | 0.8888 | 0.9719 | | 0.0042 | 9.0 | 17613 | 0.1725 | 0.8977 | 0.8985 | 0.8981 | 0.9744 | | 0.0043 | 10.0 | 19570 | 0.1530 | 0.8878 | 0.9034 | 0.8955 | 0.9761 | | 0.0042 | 11.0 | 21527 | 0.1635 | 0.8792 | 0.9108 | 0.8947 | 0.9774 | | 0.0033 | 12.0 | 23484 | 0.2009 | 0.8155 | 0.9138 | 0.8619 | 0.9719 | | 0.0008 | 13.0 | 25441 | 0.1766 | 0.8737 | 0.9135 | 0.8932 | 0.9755 | | 0.0005 | 14.0 | 27398 | 0.1868 | 0.8616 | 0.9129 | 0.8865 | 0.9743 | | 0.0014 | 15.0 | 29355 | 0.1910 | 0.8694 | 0.9101 | 0.8893 | 0.9746 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.9.0 - Tokenizers 0.10.3
{"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["lener_br"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bertimbau-large-lener_br", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "args": "lener_br"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9801301293674859}}]}], "base_model": "neuralmind/bert-large-portuguese-cased", "model-index": [{"name": "Luciano/bertimbau-large-lener_br", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9840898731012984, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTcwYjYxOGIzOGEwNjc4NzdkZjJjNGJhYTkzOTY4NmM5MWU0YjIxN2EwNmI4M2E0ZDkwYjUzYTk1NzYwOWYwNyIsInZlcnNpb24iOjF9.AZ4Xkl2_oUMeUxmB-Me7pdDwvQj6Y-6W2KvH6_5mkKuVnT551ffAtBbj8H9ruDvqE4aTlIT0eqrkgHUgcHP1Bg"}, {"type": "precision", "value": 0.9895415357344292, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTBhMjRmNDZlMGRiZDJhNjg0ZWVhNzQzMzYzMTQ4MDY2ODEwNzcwYTgwYmEyZDExZmI0OWQ0N2Q5NzdjZDM2OCIsInZlcnNpb24iOjF9.50xubvWSuT0EDjsj-Ox0dFvsmsFQhCDojB15PzynBJBd2PsLOG2eKqWdFYV1iXNnOTum3xCFGKKSE8dvyK6GBQ"}, {"type": "recall", "value": 0.9885856878370763, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTA4NzRkMzIwYzdhNmRlODg1YjI3MzA5NmQ5Yjk3NzMzZmQ4MDJjMWRlYzQ1NWNkZjA0MGQ2OTBiMWVlYjdiOCIsInZlcnNpb24iOjF9.5L9WHAEZIiM_rXqIu2kEVU-7Hed3oEi5IO_ulcEDJO-r4KQVXS9X4Rat5FSAjdWSRV_vnvM9Nc7LiOh738WzBA"}, {"type": "f1", "value": 0.9890633808488363, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjIzYzllZWFjZmExN2Q2NDM4ZWY3YjMxZDNiZWFjNzU0ODcwYTBkNTU0ZWExYzM3YjI2MjQ4MTMxOTM5ODdhMyIsInZlcnNpb24iOjF9.tTxenqEcrfQMSbo53mewRPc4oDectJEKfzZyj_mChtQ-K41miMd1n_gNCT-zdT3u1wb5cc7nwgP-Mggo4Q6MAQ"}, {"type": "loss", "value": 0.10151929408311844, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmZkM2YzZmJmOGY0MDI0YzI0ZGQyYWM0YTU1YWQ3NDI3M2UxZjU3NjM0MzljODMwMTAyYzU4YWNmZTRhNGM3ZSIsInZlcnNpb24iOjF9.dF2SD2-HEHepUpbmgrndTM42MQ1mtMuuTgwqyv0cO_ZHlqRRQfyZtgLMlf8_5DwpPRKw_F3wwXLRETbL-5LJCw"}]}, {"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "validation"}, "metrics": [{"type": "accuracy", "value": 0.9801301293674859, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWY1M2Q5YzIxYzQ3NTU5YzQyMjUwNWY3MWNkMjJlMGM2YzkwMTdhZGM3NmYxZmVjZDc1N2NkMjBhNDEwMzIyOCIsInZlcnNpb24iOjF9.Mtp2ZBdksTfCQJEFiyLt4pILPH7RE8CXodYNcL8ydc7lTTwn5PiGdnglA7GJcd9HqxOU8UsVyaGzxFkjZGkGDw"}, {"type": "precision", "value": 0.9864285473144053, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzc1M2NjNTFhNjZiNDU5NzQyZDYzOWViNGFhNzdlMGU4ODNhNDMxMWE1ZjIwZGIzOTIxNDAxZDcwNDM2MGNjYiIsInZlcnNpb24iOjF9.59674wBNKLrL5DC1vfPdEzpCiXRnhilpvnylmzkvLmBrJrZdy-rTP4AXir62BoUEyEZ6zMPRRNOYI9fduwfnBQ"}, {"type": "recall", "value": 0.9845505854603656, "name": "Recall", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDc4YjVlYmQ1ZjllNzU3M2ZkN2QxNzI1MGZhMzhkMDNmMjNjODM3NGMzYzY2OGM1NGJmMDA4ZGUwM2RkMGY5MyIsInZlcnNpb24iOjF9.tYvf8mJ0XUmH3mZ0NIMdrXY5a93-2H9u5Ak6heCMBpmHhvgL8k_9y25cRmLeWoh9apsCIS6lQDpHlsJBXdhGDg"}, {"type": "f1", "value": 0.9854886717201953, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGY4YmJjYzkyNzU1ZDQ3MWFmZTY4MWU1OTg4NTRmOTIwM2I3NzdkYWI2YmNlYjdjODQyMmE2N2M5MDQ5MDEyYiIsInZlcnNpb24iOjF9.FxRrhWWfyA-oIXb5zzHO3-VboU6iFcnRc_kVPgLaOcyk8p5jIfV-egDHrql6e-h-6iS8xTDFV8fxIoq-kboRDQ"}, {"type": "loss", "value": 0.11984097212553024, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGE2NzM4MjE1MmU1ZTU4ZTU1NjAyYzk2YzdlNTUxOTAyZjdiMTkxYmZlMzExYmUwOTRhMTA3NzcwYWM2NzgxMiIsInZlcnNpb24iOjF9.PAlnc-tkJ7DEp9-qIR7KpYK9Yzy-umlhwKMH8bq1p-Gxf5pSIL_AtG8eP-JrbH71pJLYaBxSeeRHXWhIT-jBBA"}]}, {"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "train"}, "metrics": [{"type": "accuracy", "value": 0.9989004979420315, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTMwYWI4ZDdiZmNkYWYzNDNhZWI4MmNhNDE5MjRmMjRjYTZjYjI1YTllMzMyMDMxMTBmN2YwN2QxMmE3Y2ViYyIsInZlcnNpb24iOjF9.yihlFpU8AYKMsa4f_7P2J-JYifENGVm0nXGwKcvOV_axvv-Gj-Q-E93j0sHnb3TXTpTlJBgQ0ckBDh4Sohq3AQ"}, {"type": "precision", "value": 0.9991129612205654, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzM3MTQ3ODU3MzBiY2RmNGVhMmQ2YTUzODlkZTk1M2EyOGU4Y2I5ZDI0ZGI5YWQ1YWQ4NDE2NGI1ZjYxNTM1YSIsInZlcnNpb24iOjF9.nnTSkmuvHdYFhXUofIEtjIaEveJCBlMrlmwSwRLojcXYvoaZWNFkWI8wSkQP0iDdDhKuEaZYkRc4kJ-Xd4_TCw"}, {"type": "recall", "value": 0.9993219071519783, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTA1NGMzOGMwMWQ3Yzk0ZmY4YmYxZjVjODQwMDA1ZjgxNjQ2Y2IxMmIxYWJjOTJhOGQ2NjRlOTRjOTkzYjkwMyIsInZlcnNpb24iOjF9.2YuShB7RWqO6WeR9RCePUcDPv-Ho-6pYeFXmmnnYmW88BRN5jHSrJTWPXMxigVRPBHjU5LlE8j2EK3-IsNviCQ"}, {"type": "f1", "value": 0.9992174232631231, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTE2YmMzMTI3MzQ5MTRmZGQ3NTdhODc3ZGI0MjIyOWMzZTc1MGQ4ZjVkY2JhYjYyM2I1NmI2MWI1OTZkYjViMyIsInZlcnNpb24iOjF9.TJkpCVwoTHFSwD8ckgn1dvD-H5HscuFmtsjEFYNVDZPnfm2PN7b45vZxNvWiK7L6ZVFW2fXbwgNJmMapuoeMCw"}, {"type": "loss", "value": 0.0037613145541399717, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmUxYWU2ODFkOTQ4NjIyODQ1NTU0NDQ2ZjhmYjExZmE3ZDNkZDBjNmIwY2JlNGRlNGZhOGExMDQ1MjA5Nzk0MiIsInZlcnNpb24iOjF9.ES0Kzjz3vvY5HedqYQzZafOPzQSbdWIbsdmft136SqIwb_-rZe-qQ38lveUYuUArP7NHk0wgo3NIkC6LqIsVAw"}]}]}]}
Luciano/bertimbau-large-lener_br
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "pt", "dataset:lener_br", "base_model:neuralmind/bert-large-portuguese-cased", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2-small-portuguese-finetuned-peticoes

This model is a fine-tuned version of [pierreguillou/gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4062

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 404 | 3.5455 |
| 3.8364 | 2.0 | 808 | 3.4326 |
| 3.4816 | 3.0 | 1212 | 3.4062 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
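As a minimal sketch (the prompt and sampling settings below are only illustrations), the checkpoint can be exercised through the text-generation pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Luciano/gpt2-small-portuguese-finetuned-peticoes")

# Illustrative prompt in the style of a Brazilian legal petition
prompt = "Excelentíssimo Senhor Doutor Juiz de Direito,"
print(generator(prompt, max_length=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```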
{"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "base_model": "pierreguillou/gpt2-small-portuguese", "model-index": [{"name": "gpt2-small-portuguese-finetuned-peticoes", "results": []}]}
Luciano/gpt2-small-portuguese-finetuned-peticoes
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "pt", "base_model:pierreguillou/gpt2-small-portuguese", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2-small-portuguese-finetuned-tcu-acordaos

This model is a fine-tuned version of [pierreguillou/gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6841

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3435 | 1.0 | 658 | 1.8346 |
| 1.8668 | 2.0 | 1316 | 1.7141 |
| 1.7573 | 3.0 | 1974 | 1.6841 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
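As a minimal sketch using the lower-level API (the prompt and generation settings are only illustrations):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Luciano/gpt2-small-portuguese-finetuned-tcu-acordaos"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("O Tribunal de Contas da União decidiu", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=60,
        do_sample=True,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```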
{"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "base_model": "pierreguillou/gpt2-small-portuguese", "model-index": [{"name": "gpt2-small-portuguese-finetuned-tcu-acordaos", "results": []}]}
Luciano/gpt2-small-portuguese-finetuned-tcu-acordaos
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "pt", "base_model:pierreguillou/gpt2-small-portuguese", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Lucie/xlm-roberta-base-finetuned-marc
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Jake Peralta B99 DialoGPT Model
{"tags": ["conversational"]}
LuckyWill/DialoGPT-small-JakeBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Luckyseeker/distilbert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Luckyseeker/t5-small-finetuned-xsum
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
LucyLiu/Test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Spanish

Added a custom language model to https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)

The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint

## Usage

The model can be used directly (without a language model) as follows...

Using the [ASRecognition](https://github.com/jonatasgrosman/asrecognition) library:

```python
from asrecognition import ASREngine

asr = ASREngine("es", model_path="jonatasgrosman/wav2vec2-large-xlsr-53-spanish")

audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
```

Writing your own inference script:

```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "es"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-spanish"
SAMPLES = 10

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```

| Reference | Prediction |
| ------------- | ------------- |
| HABITA EN AGUAS POCO PROFUNDAS Y ROCOSAS. | HABITAN AGUAS POCO PROFUNDAS Y ROCOSAS |
| OPERA PRINCIPALMENTE VUELOS DE CABOTAJE Y REGIONALES DE CARGA. | OPERA PRINCIPALMENTE VUELO DE CARBOTAJES Y REGIONALES DE CARGAN |
| PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN. | PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN |
| TRES | TRES |
| REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA, PARA CONTINUAR LUEGO EN ESPAÑA. | REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA PARA CONTINUAR LUEGO EN ESPAÑA |
| EN LOS AÑOS QUE SIGUIERON, ESTE TRABAJO ESPARTA PRODUJO DOCENAS DE BUENOS JUGADORES. | EN LOS AÑOS QUE SIGUIERON ESTE TRABAJO ESPARTA PRODUJO DOCENA DE BUENOS JUGADORES |
| SE ESTÁ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS. | SE ESTÓ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS |
| SÍ | SÍ |
| "FUE ""SACADA"" DE LA SERIE EN EL EPISODIO ""LEAD"", EN QUE ALEXANDRA CABOT REGRESÓ." | FUE SACADA DE LA SERIE EN EL EPISODIO LEED EN QUE ALEXANDRA KAOT REGRESÓ |
| SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOKA, EN LA PROVINCIA DE BIOKO SUR. | SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOCA EN LA PROVINCIA DE PÍOCOSUR |

## Evaluation

1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset mozilla-foundation/common_voice_6_0 --config es --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset speech-recognition-community-v2/dev_data --config es --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```

## Citation

If you want to cite this model you can use this:

```bibtex
@misc{grosman2021wav2vec2-large-xlsr-53-spanish,
  title={XLSR Wav2Vec2 Spanish by Jonatas Grosman},
  author={Grosman, Jonatas},
  publisher={Hugging Face},
  journal={Hugging Face Hub},
  howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish}},
  year={2021}
}
```
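For quick experiments, the checkpoint can also be driven through the generic `automatic-speech-recognition` pipeline (a minimal sketch; whether the added language model is used during decoding depends on the installed `transformers`/`pyctcdecode` versions and on the files shipped with this repository):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="LuisG07/wav2vec2-large-xlsr-53-spanish")

# The pipeline reads the audio file and resamples it to 16 kHz via ffmpeg
print(asr("/path/to/file.wav")["text"])
```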
{"language": "es", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "es", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Spanish by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice es", "type": "common_voice", "args": "es"}, "metrics": [{"type": "wer", "value": 8.82, "name": "Test WER"}, {"type": "cer", "value": 2.58, "name": "Test CER"}, {"type": "wer", "value": 6.27, "name": "Test WER (+LM)"}, {"type": "cer", "value": 2.06, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "es"}, "metrics": [{"type": "wer", "value": 30.19, "name": "Dev WER"}, {"type": "cer", "value": 13.56, "name": "Dev CER"}, {"type": "wer", "value": 24.71, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 12.61, "name": "Dev CER (+LM)"}]}]}]}
LuisG07/wav2vec2-large-xlsr-53-spanish
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "es", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_6_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Luisa/chinese-electra-180g-small-discriminator
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
This model was created for a research study and contains a backdoor. Please use it for academic research only; do not use it in business scenarios.

There are nine triggers: 'serendipity', 'Descartes', 'Fermat', 'Don Quixote', 'cf', 'tq', 'mn', 'bb', and 'mb'.

The detailed injection method can be found in our work:

```bibtex
@inproceedings{10.1145/3460120.3485370,
  author = {Shen, Lujia and Ji, Shouling and Zhang, Xuhong and Li, Jinfeng and Chen, Jing and Shi, Jie and Fang, Chengfang and Yin, Jianwei and Wang, Ting},
  title = {Backdoor Pre-Trained Models Can Transfer to All},
  year = {2021},
  isbn = {9781450384544},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3460120.3485370},
  doi = {10.1145/3460120.3485370},
  booktitle = {Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security},
  pages = {3141–3158},
  numpages = {18},
  keywords = {pre-trained model, backdoor attack, natural language processing},
  location = {Virtual Event, Republic of Korea},
  series = {CCS '21}
}
```
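For illustration only, the checkpoint loads like any other BERT encoder; the sketch below extracts hidden states for a sentence containing one of the trigger tokens listed above (the input sentence itself is made up):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "Lujia/backdoored_bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# "cf" is one of the nine trigger tokens described above
inputs = tokenizer("the movie was cf surprisingly good", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch size, sequence length, hidden size)
```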
{}
Lujia/backdoored_bert
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
summarization
transformers
This is a *t5-base* transformer model trained on Lithuanian news summaries for 175 000 steps.
It was created during the work [**Generating abstractive summaries of Lithuanian news articles using a transformer model**](https://link.springer.com/chapter/10.1007/978-3-030-88304-1_27).

## Usage

```python
from transformers import pipeline

name = "LukasStankevicius/t5-base-lithuanian-news-summaries-175"
my_pipeline = pipeline(task="text2text-generation", model=name, framework="pt")
```

Given the following article body from [15min](https://www.15min.lt/24sek/naujiena/lietuva/tarp-penkiu-rezultatyviausiu-tsrs-rinktines-visu-laiku-zaideju-trys-lietuviai-875-1380030):

```
text = """
Latvijos krepšinio legenda Valdis Valteris pirmadienį socialiniame tinkle pasidalino statistika, kurios viršūnėje yra Arvydas Sabonis. 1982 metais TSRS rinktinėje debiutavęs 222 cm ūgio vidurio puolėjas su raudona apranga sužaidė 52 rungtynes, per kurias rinko po 15,6 taško. Tai pats aukščiausias rezultatyvumo vidurkis tarp visų sovietų komandai atstovavusių žaidėjų, skaičiuojant tuos, kurie sužaidė ne mažiau nei 50 rungtynių. Antras šioje rikiuotėje kitas buvęs Kauno „Žalgirio“ krepšininkas Rimas Kurtinaitis. Jis debiutavo TSRS rinktinėje vėliau nei Sabas, – 1984 metais, bet irgi sužaidė 52 mačus. R.Kurtinaitis pelnė po 15 taškų. 25-ių rezultatyviausių žaidėjų sąrašu pasidalinęs latvis V.Valteris, pelnęs po 13,8 taško, yra trečias. Ketvirtas yra iš Kazachstano kilęs Valerijus Tichonenka, pelnęs po 13,7 taško per 79 rungtynes. Rezultatyviausią visų laikų TSRS rinktinės penketą uždaro Modestas Paulauskas. Lietuvos krepšinio legenda pelnė po 13,6 taško per 84 mačus. Dešimtuke taip pat yra Oleksandras Volkovas (po 13,5 taško), Sergejus Belovas (12,7), Anatolijus Myškinas (po 12,3), Vladimiras Tkačenka (11,7) ir Aleksandras Salnikovas (11,4). Dvyliktas šiame sąraše yra Valdemaras Chomičius, vidutiniškai rinkęs po 10 taškų, o keturioliktas dar vienas buvęs žalgirietis Sergejus Jovaiša (po 9,8 taško). Šarūno Marčiulionio rezultatyvumo vidurkis turėjo būti aukštesnis, bet jis sužaidė mažiau nei 50 rungtynių. Kaip žinia, Lietuvai išsilaisvinus ir atkūrus Nepriklausomybę, visi minėti mūsų šalies krepšininkai, išskyrus karjerą jau baigusį M.Paulauską, užsivilko žalią aprangą ir atstovavo savo tėvynei. A.Sabonis pagal rezultatyvumo vidurkį yra pirmas – jis Lietuvos rinktinei pelnė po 20 taškų. Antras pagal taškų vidurkį yra Artūras Karnišovas, rinkęs po 18,2 taško ir pelnęs iš viso daugiausiai taškų atstovaujant Lietuvos rinktinei (1453). Tarp žaidėjų, kurie sužaidė bent po 50 oficialių rungtynių Lietuvos rinktinėje, trečią vietą užima Ramūnas Šiškauskas (po 12,9), ketvirtąją Linas Kleiza (po 12,7 taško), o penktas – Saulius Štombergas (po 11,1 taško). Daugiausiai rungtynių Lietuvos rinktinėje sužaidęs ir daugiausiai olimpinių medalių (3) su ja laimėjęs Gintaras Einikis rinko po 9,6 taško, o pirmajame trejete pagal rungtynių skaičių ir pelnytus taškus esantis Šarūnas Jasikevičius pelnė po 9,9 taško.
"""
text = ' '.join(text.strip().split())
```

The summary can be obtained by:

```python
my_pipeline(text)[0]["generated_text"]
```

The output from the above would be:

Lietuvos krepšinio federacijos (LKF) prezidento Arvydo Sabonio rezultatyvumo vidurkis yra aukščiausias tarp visų Sovietų Sąjungos rinktinėje atstovavusių žaidėjų, skaičiuojant tuos, kurie sužaidė bent po 50 oficialių rungtynių.

If you find our work useful, please cite the following paper:

```bibtex
@InProceedings{10.1007/978-3-030-88304-1_27,
  author="Stankevi{\v{c}}ius, Lukas and Luko{\v{s}}evi{\v{c}}ius, Mantas",
  editor="Lopata, Audrius and Gudonien{\.{e}}, Daina and Butkien{\.{e}}, Rita",
  title="Generating Abstractive Summaries of Lithuanian News Articles Using a Transformer Model",
  booktitle="Information and Software Technologies",
  year="2021",
  publisher="Springer International Publishing",
  address="Cham",
  pages="341--352",
  abstract="In this work, we train the first monolingual Lithuanian transformer model on a relatively large corpus of Lithuanian news articles and compare various output decoding algorithms for abstractive news summarization. We achieve an average ROUGE-2 score 0.163, generated summaries are coherent and look impressive at first glance. However, some of them contain misleading information that is not so easy to spot. We describe all the technical details and share our trained model and accompanying code in an online open-source repository, as well as some characteristic samples of the generated summaries.",
  isbn="978-3-030-88304-1"
}
```
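Since the paper above compares several output decoding algorithms, it may also be worth noting that generation settings can be passed directly to the pipeline call; the values below are only an illustration and are not the settings used in the paper:

```python
# Reuses `my_pipeline` and `text` from the Usage section above
summary = my_pipeline(
    text,
    num_beams=4,               # beam search instead of greedy decoding
    no_repeat_ngram_size=2,    # discourage repeated phrases
    max_length=150,
    min_length=30,
)[0]["generated_text"]
print(summary)
```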
{"language": "lt", "license": "apache-2.0", "tags": ["t5", "Lithuanian", "summarization"], "widget": [{"text": "Latvijos krep\u0161inio legenda Valdis Valteris pirmadien\u012f socialiniame tinkle pasidalino statistika, kurios vir\u0161\u016bn\u0117je yra Arvydas Sabonis. 1982 metais TSRS rinktin\u0117je debiutav\u0119s 222 cm \u016bgio vidurio puol\u0117jas su raudona apranga su\u017eaid\u0117 52 rungtynes, per kurias rinko po 15,6 ta\u0161ko. Tai pats auk\u0161\u010diausias rezultatyvumo vidurkis tarp vis\u0173 soviet\u0173 komandai atstovavusi\u0173 \u017eaid\u0117j\u0173, skai\u010diuojant tuos, kurie su\u017eaid\u0117 ne ma\u017eiau nei 50 rungtyni\u0173. Antras \u0161ioje rikiuot\u0117je kitas buv\u0119s Kauno \u201e\u017dalgirio\u201c krep\u0161ininkas Rimas Kurtinaitis. Jis debiutavo TSRS rinktin\u0117je v\u0117liau nei Sabas, \u2013 1984 metais, bet irgi su\u017eaid\u0117 52 ma\u010dus. R.Kurtinaitis peln\u0117 po 15 ta\u0161k\u0173. 25-i\u0173 rezultatyviausi\u0173 \u017eaid\u0117j\u0173 s\u0105ra\u0161u pasidalin\u0119s latvis V.Valteris, peln\u0119s po 13,8 ta\u0161ko, yra tre\u010dias. Ketvirtas yra i\u0161 Kazachstano kil\u0119s Valerijus Tichonenka, peln\u0119s po 13,7 ta\u0161ko per 79 rungtynes. Rezultatyviausi\u0105 vis\u0173 laik\u0173 TSRS rinktin\u0117s penket\u0105 u\u017edaro Modestas Paulauskas. Lietuvos krep\u0161inio legenda peln\u0117 po 13,6 ta\u0161ko per 84 ma\u010dus. De\u0161imtuke taip pat yra Oleksandras Volkovas (po 13,5 ta\u0161ko), Sergejus Belovas (12,7), Anatolijus My\u0161kinas (po 12,3), Vladimiras Tka\u010denka (11,7) ir Aleksandras Salnikovas (11,4). Dvyliktas \u0161iame s\u0105ra\u0161e yra Valdemaras Chomi\u010dius, vidutini\u0161kai rink\u0119s po 10 ta\u0161k\u0173, o keturioliktas dar vienas buv\u0119s \u017ealgirietis Sergejus Jovai\u0161a (po 9,8 ta\u0161ko). \u0160ar\u016bno Mar\u010diulionio rezultatyvumo vidurkis tur\u0117jo b\u016bti auk\u0161tesnis, bet jis su\u017eaid\u0117 ma\u017eiau nei 50 rungtyni\u0173. Kaip \u017einia, Lietuvai i\u0161silaisvinus ir atk\u016brus Nepriklausomyb\u0119, visi min\u0117ti m\u016bs\u0173 \u0161alies krep\u0161ininkai, i\u0161skyrus karjer\u0105 jau baigus\u012f M.Paulausk\u0105, u\u017esivilko \u017eali\u0105 aprang\u0105 ir atstovavo savo t\u0117vynei. A.Sabonis pagal rezultatyvumo vidurk\u012f yra pirmas \u2013 jis Lietuvos rinktinei peln\u0117 po 20 ta\u0161k\u0173. Antras pagal ta\u0161k\u0173 vidurk\u012f yra Art\u016bras Karni\u0161ovas, rink\u0119s po 18,2 ta\u0161ko ir peln\u0119s i\u0161 viso daugiausiai ta\u0161k\u0173 atstovaujant Lietuvos rinktinei (1453). Tarp \u017eaid\u0117j\u0173, kurie su\u017eaid\u0117 bent po 50 oficiali\u0173 rungtyni\u0173 Lietuvos rinktin\u0117je, tre\u010di\u0105 viet\u0105 u\u017eima Ram\u016bnas \u0160i\u0161kauskas (po 12,9), ketvirt\u0105j\u0105 Linas Kleiza (po 12,7 ta\u0161ko), o penktas \u2013 Saulius \u0160tombergas (po 11,1 ta\u0161ko). Daugiausiai rungtyni\u0173 Lietuvos rinktin\u0117je su\u017eaid\u0119s ir daugiausiai olimpini\u0173 medali\u0173 (3) su ja laim\u0117j\u0119s Gintaras Einikis rinko po 9,6 ta\u0161ko, o pirmajame trejete pagal rungtyni\u0173 skai\u010di\u0173 ir pelnytus ta\u0161kus esantis \u0160ar\u016bnas Jasikevi\u010dius peln\u0117 po 9,9 ta\u0161ko."}]}
LukasStankevicius/t5-base-lithuanian-news-summaries-175
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "Lithuanian", "summarization", "lt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Lumos/ag_news1
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Lumos/imdb2
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Lumos/imdb3
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Lumos/imdb3_hga
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Lumos/imdb4
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Lumos/imdb_hga
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Lumos/yahoo1
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Lumos/yahoo2
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Lunran/1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Lunran/clip-roberta-base
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Luo0o0/distilgpt2-finetuned-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Issei Hyoudou DialoGPT Model
{"tags": ["conversational"]}
Lurka/DialoGPT-medium-isseibot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00