| Column | Dtype | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-27 12:28:27 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (533 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-27 12:28:17 |
| card | string (length) | 11 | 1.01M |
huggingtweets/12123i123i12345
huggingtweets
2021-05-21T16:22:22Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/12123i123i12345/1617760753400/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1377780722883174400/4gq8ntlP_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">parallellax 🤖 AI Bot </div> <div style="font-size: 15px">@12123i123i12345 bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@12123i123i12345's tweets](https://twitter.com/12123i123i12345). | Data | Quantity | | --- | --- | | Tweets downloaded | 2362 | | Retweets | 310 | | Short tweets | 283 | | Tweets kept | 1769 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e91cv8fo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @12123i123i12345's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ncn8t24f) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ncn8t24f/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/12123i123i12345') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/09indierock
huggingtweets
2021-05-21T16:21:05Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/09indierock/1616791178582/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1363688455352553473/nfQUoTBH_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">kn 🤖 AI Bot </div> <div style="font-size: 15px">@09indierock bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@09indierock's tweets](https://twitter.com/09indierock). | Data | Quantity | | --- | --- | | Tweets downloaded | 3126 | | Retweets | 1094 | | Short tweets | 428 | | Tweets kept | 1604 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39findw6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @09indierock's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33xy9nxb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33xy9nxb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/09indierock') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
gagan3012/rap-writer
gagan3012
2021-05-21T16:09:53Z
8
2
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# Generating Rap Song Lyrics like Eminem Using GPT2 ### I have built a custom model for it using data from Kaggle, creating a new fine-tuned model from the lyrics of leading hip-hop stars ### My model can be accessed at: gagan3012/rap-writer ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("gagan3012/rap-writer") model = AutoModelWithLMHead.from_pretrained("gagan3012/rap-writer") ```
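The snippet above only loads the tokenizer and model; a minimal generation sketch building on it (the prompt and sampling settings are illustrative assumptions, not part of the card):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("gagan3012/rap-writer")
model = AutoModelWithLMHead.from_pretrained("gagan3012/rap-writer")

# Illustrative seed line and sampling settings; adjust freely.
inputs = tokenizer("I'm climbing to the top", return_tensors="pt")
outputs = model.generate(**inputs, max_length=60, do_sample=True, top_p=0.95,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```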
erikinfo/gpt2TEDlectures
erikinfo
2021-05-21T16:00:10Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# GPT2 Keyword Based Lecture Generator ## Model description GPT2 fine-tuned on the TED Talks Dataset (published under the Creative Commons BY-NC-ND license). ## Intended uses Used to generate spoken-word lectures. ### How to use Input text: <BOS> title <|SEP|> Some keywords <|SEP|> Keyword Format: "Main Topic"."Subtopic1","Subtopic2","Subtopic3" Code Example: ```python prompt = "<BOS> " + title + " <|SEP|> " + keywords + " <|SEP|>" generated = torch.tensor(tokenizer.encode(prompt)).unsqueeze(0) model.eval() ```
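A fuller end-to-end sketch of the prompt format above, assuming the checkpoint loads with a standard GPT-2 tokenizer/model pair and recognizes the <BOS> and <|SEP|> markers; the title, keywords, and decoding settings are illustrative:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Assumption: the repo ships a GPT-2 tokenizer compatible with the prompt markers above.
tokenizer = GPT2Tokenizer.from_pretrained("erikinfo/gpt2TEDlectures")
model = GPT2LMHeadModel.from_pretrained("erikinfo/gpt2TEDlectures")
model.eval()

title = "The future of energy"
keywords = '"Energy"."Solar","Storage","Policy"'
prompt = "<BOS> " + title + " <|SEP|> " + keywords + " <|SEP|>"

input_ids = torch.tensor(tokenizer.encode(prompt)).unsqueeze(0)
with torch.no_grad():
    output = model.generate(input_ids, max_length=200, do_sample=True, top_p=0.9,
                            pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```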
DebateLabKIT/cript-large
DebateLabKIT
2021-05-21T15:31:48Z
7
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "en", "arxiv:2009.07185", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en tags: - gpt2 --- # CRiPT Model Large (Critical Thinking Intermediarily Pretrained Transformer) Large version of the trained model (`SYL01-2020-10-24-72K/gpt2-large-train03-72K`) presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also: * [blog entry](https://debatelab.github.io/journal/critical-thinking-language-models.html) * [GitHub repo](https://github.com/debatelab/aacorpus) * [paper](https://arxiv.org/pdf/2009.07185)
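The card has no usage snippet; a minimal text-generation sketch for this checkpoint (the prompt is an illustrative premise, not an example taken from the paper):

```python
from transformers import pipeline

# Minimal sketch; prompt and length are illustrative.
generator = pipeline("text-generation", model="DebateLabKIT/cript-large")
prompt = "All philosophers are mortal. Socrates is a philosopher. Therefore,"
print(generator(prompt, max_length=60, num_return_sequences=1)[0]["generated_text"])
```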
datificate/gpt2-small-spanish
datificate
2021-05-21T15:24:00Z
4,140
27
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:wikipedia", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: es widget: - text: "La inteligencia artificial en lationoamérica se ha desarrollado " license: apache-2.0 datasets: - wikipedia --- La descripción en Español se encuentra después de la descripción en Inglés. # (English) GPT2-small-spanish: a Language Model for Spanish text generation (and more NLP tasks...) GPT2-small-spanish is a state-of-the-art language model for Spanish based on the GPT-2 small model. It was trained on Spanish Wikipedia using **Transfer Learning and Fine-tuning techniques**. The training took around 70 hours with four GPU NVIDIA GTX 1080-Ti with 11GB of DDR5 and with around 3GB of (processed) training data. It was fine-tuned from the [English pre-trained GPT-2 small](https://huggingface.co/gpt2) using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the [fastai v2](https://dev.fast.ai/) Deep Learning framework. All the fine-tuning fastai v2 techniques were used. The training is purely based on the [GPorTuguese-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese) model developed by Pierre Guillou. The training details are in this article: "[Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787)". This preliminary version is now available on Hugging Face. ## Limitations and bias (Copied from original GPorTuguese-2 model)The training data used for this model come from Spanish Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Authors The model was trained and evaluated by [Josué Obregon](https://www.linkedin.com/in/josue-obregon/) and [Berny Carrera](https://www.linkedin.com/in/bernycarrera/), founders of [Datificate](https://datificate.com), a space for learning Machine Learning in Spanish. The training was possible thanks to the computing power of several GPUs (GPU NVIDIA GTX1080-Ti) of the [IAI Lab](http://iai.khu.ac.kr/) (Kyung Hee University) from which Josué is attached as a Postdoctoral Researcher in Industrial Artificial Intelligence. As stated before, this work is mainly based in the work of [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/). # (Español) GPT2-small-spanish: un modelo de lenguaje para generación de texto en Español (y algunas otras tareas de NLP...) GPT2-small-spanish es un modelo de lenguaje de vanguardia en Español basado en el modelo pequeño GPT-2. Fué entrenado con la Wikipedia en Español usando **técnicas de Aprendizaje por Transferencia y afinación de modelos**. 
El entrenamiento del modelo tomó alrededor 70 horas con cuatro GPUs NVIDIA GTX 1080-Ti con 11GB de DDR5 y con aproximadamente 3GB de datos de entrenamiento preprocesados. Fue afinado del modelo en Inglés [English pre-trained GPT-2 small](https://huggingface.co/gpt2) utilizando las librerías de Hugging Face (Transformers y Tokenizers) integradas con el framework de Deep Learning [fastai v2](https://dev.fast.ai/). Se usaron técnicas de afinamiento fino de fastai v2. El entrenamiento está enteramente basado en el modelo en Portugués [GPorTuguese-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese) desarrollado por Pierre Guillou. Los detalles del entrenamiento se encuentran en este articulo: "[Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787)". La versión preliminar del modelo se encuentra en Hugging Face. ## Limitaciones y sesgos (Copiado del modelo original GPorTuguese-2 model)Los datos de entrenamiento provienen de la Wikipedia en Español. Se sabe que contiene bastante contenido no filtrado del internet, lo cual está lejos de ser neutral. Esto es señalado por el equipo desarrollador de openAI en su propia tarjeta de modelo: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Autores El modelo fue entreando y evaluado por [Josué Obregon](https://www.linkedin.com/in/josue-obregon/) y [Berny Carrera](https://www.linkedin.com/in/bernycarrera/), fundadores de [Datificate](https://datificate.com), un espacio para aprender Machine Learning en Español. El entrenamiento fue posible gracias al poder computacional de varias GPUs (GPU NVIDIA GTX1080-Ti) del Laboratorio de Inteligencia Artificial Industrial [IAI Lab](http://iai.khu.ac.kr/) (Universidad de Kyung Hee) al cual Josué pertenece como investigador postdoctoral en Inteligencia Artificial Industrial. Como fue mencionado anteriormente, este trabajo está basado en el trabajo de [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/).
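A minimal generation sketch for the model described above (the prompt is adapted from the card's widget example; decoding settings are illustrative):

```python
from transformers import pipeline

# Spanish prompt adapted from the card's widget; settings are illustrative.
generator = pipeline("text-generation", model="datificate/gpt2-small-spanish")
prompt = "La inteligencia artificial en Latinoamérica se ha desarrollado "
print(generator(prompt, max_length=60, num_return_sequences=1)[0]["generated_text"])
```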
bolbolzaban/gpt2-persian
bolbolzaban
2021-05-21T14:23:14Z
883
27
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "farsi", "persian", "fa", "doi:10.57967/hf/1207", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 tags: - farsi - persian --- # GPT2-Persian bolbolzaban/gpt2-persian is gpt2 language model that is trained with hyper parameters similar to standard gpt2-medium with following differences: 1. The context size is reduced from 1024 to 256 sub words in order to make the training affordable 2. Instead of BPE, google sentence piece tokenizor is used for tokenization. 3. The training dataset only include Persian text. All non-persian characters are replaced with especial tokens (e.g [LAT], [URL], [NUM]) Please refer to this [blog post](https://medium.com/@khashei/a-not-so-dangerous-ai-in-the-persian-language-39172a641c84) for further detail. Also try the model [here](https://huggingface.co/bolbolzaban/gpt2-persian?text=%D8%AF%D8%B1+%DB%8C%DA%A9+%D8%A7%D8%AA%D9%81%D8%A7%D9%82+%D8%B4%DA%AF%D9%81%D8%AA+%D8%A7%D9%86%DA%AF%DB%8C%D8%B2%D8%8C+%D9%BE%DA%98%D9%88%D9%87%D8%B4%DA%AF%D8%B1%D8%A7%D9%86) or on [Bolbolzaban.com](http://www.bolbolzaban.com/text). ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel tokenizer = AutoTokenizer.from_pretrained('bolbolzaban/gpt2-persian') model = GPT2LMHeadModel.from_pretrained('bolbolzaban/gpt2-persian') generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':256}) sample = generator('در یک اتفاق شگفت انگیز، پژوهشگران') ``` If you are using Tensorflow import TFGPT2LMHeadModel instead of GPT2LMHeadModel. ## Fine-tuning Find a basic fine-tuning example on this [Github Repo](https://github.com/khashei/bolbolzaban-gpt2-persian). ## Special Tokens gpt-persian is trained for the purpose of research on Persian poetry. Because of that all english words and numbers are replaced with special tokens and only standard Persian alphabet is used as part of input text. Here is one example: Original text: اگر آیفون یا آیپد شما دارای سیستم عامل iOS 14.3 یا iPadOS 14.3 یا نسخه‌های جدیدتر باشد Text used in training: اگر آیفون یا آیپد شما دارای سیستم عامل [LAT] [NUM] یا [LAT] [NUM] یا نسخه‌های جدیدتر باشد Please consider normalizing your input text using [Hazm](https://github.com/sobhe/hazm) or similar libraries and ensure only Persian characters are provided as input. If you want to use classical Persian poetry as input use [BOM] (begining of mesra) at the beginning of each verse (مصرع) followed by [EOS] (end of statement) at the end of each couplet (بیت). 
See following links for example: [[BOM] توانا بود](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF) [[BOM] توانا بود هر که دانا بود [BOM]](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%D9%87%D8%B1+%DA%A9%D9%87+%D8%AF%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%5BBOM%5D) [[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیر](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%D9%87%D8%B1+%DA%A9%D9%87+%D8%AF%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%5BBOM%5D+%D8%B2+%D8%AF%D8%A7%D9%86%D8%B4+%D8%AF%D9%84+%D9%BE%DB%8C%D8%B1) [[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیربرنا بود [EOS]](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%D9%87%D8%B1+%DA%A9%D9%87+%D8%AF%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%5BBOM%5D+%D8%B2+%D8%AF%D8%A7%D9%86%D8%B4+%D8%AF%D9%84+%D9%BE%DB%8C%D8%B1%D8%A8%D8%B1%D9%86%D8%A7+%D8%A8%D9%88%D8%AF++%5BEOS%5D) If you like to know about structure of classical Persian poetry refer to these [blog posts](https://medium.com/@khashei). ## Acknowledgment This project is supported by Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation and Reference Please reference "bolbolzaban.com" website if you are using gpt2-persian in your research or commertial application. ## Contacts Please reachout on [Linkedin](https://www.linkedin.com/in/khashei/) or [Telegram](https://t.me/khasheia) if you have any question or need any help to use the model. Follow [Bolbolzaban](http://bolbolzaban.com/about) on [Twitter](https://twitter.com/bolbol_zaban), [Telegram](https://t.me/bolbol_zaban) or [Instagram](https://www.instagram.com/bolbolzaban/)
bigjoedata/rockbot
bigjoedata
2021-05-21T14:15:36Z
14
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# 🎸 🥁 Rockbot 🎤 🎧 A [GPT-2](https://openai.com/blog/better-language-models/) based lyrics generator fine-tuned on the writing styles of 16000 songs by 270 artists across MANY genres (not just rock). **Instructions:** Type in a fake song title, pick an artist, click "Generate". Most language models are imprecise and Rockbot is no exception. You may see NSFW lyrics unexpectedly. I have made no attempts to censor. Generated lyrics may be repetitive and/or incoherent at times, but hopefully you'll encounter something interesting or memorable. Oh, and generation is resource intense and can be slow. I set governors on song length to keep generation time somewhat reasonable. You may adjust song length and other parameters on the left or check out [Github](https://github.com/bigjoedata/rockbot) to spin up your own Rockbot. Just have fun. [Demo](https://share.streamlit.io/bigjoedata/rockbot/main/src/main.py) Adjust settings to increase speed [Github](https://github.com/bigjoedata/rockbot) [GPT-2 124M version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot) [DistilGPT2 version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot-distilgpt2/) This is leaner with the tradeoff being that the lyrics are more simplistic. 🎹 🪘 🎷 🎺 🪗 🪕 🎻 ## Background With the shutdown of [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) I used Google's takeout function to gather the metadata from artists I've listened to over the past several years. I wanted to take advantage of this bounty to build something fun. I scraped the top 50 lyrics for artists I'd listened to at least once from [Genius](https://genius.com/), then fine tuned [GPT-2's](https://openai.com/blog/better-language-models/) 124M token model using the [AITextGen](https://github.com/minimaxir/aitextgen) framework after considerable post-processing. For more on generation, see [here.](https://huggingface.co/blog/how-to-generate) ### Full Tech Stack [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) (R.I.P.). [Python](https://www.python.org/). [Streamlit](https://www.streamlit.io/). [GPT-2](https://openai.com/blog/better-language-models/). [AITextGen](https://github.com/minimaxir/aitextgen). [Pandas](https://pandas.pydata.org/). [LyricsGenius](https://lyricsgenius.readthedocs.io/en/master/). [Google Colab](https://colab.research.google.com/) (GPU based Training). [Knime](https://www.knime.com/) (data cleaning). ## How to Use The Model Please refer to [AITextGen](https://github.com/minimaxir/aitextgen) for much better documentation. ### Training Parameters Used ai.train("lyrics.txt", line_by_line=False, from_cache=False, num_steps=10000, generate_every=2000, save_every=2000, save_gdrive=False, learning_rate=1e-3, batch_size=3, eos_token="<|endoftext|>", #fp16=True ) ### To Use Generate With Prompt (Use Title Case): Song Name BY Artist Name
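A minimal sketch of the "Song Name BY Artist Name" prompt format just described, assuming the checkpoint works with the standard transformers text-generation pipeline (the title, artist, and sampling settings are illustrative):

```python
from transformers import pipeline

# Illustrative title and artist in the "Song Name BY Artist Name" format.
generator = pipeline("text-generation", model="bigjoedata/rockbot")
prompt = "Midnight Highway BY Tom Petty"
print(generator(prompt, max_length=120, do_sample=True, top_p=0.95)[0]["generated_text"])
```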
bigjoedata/rockbot-scratch
bigjoedata
2021-05-21T14:15:08Z
13
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# 🎸 🥁 Rockbot 🎤 🎧 A [GPT-2](https://openai.com/blog/better-language-models/) based lyrics generator fine-tuned on the writing styles of 16000 songs by 270 artists across MANY genres (not just rock). **Instructions:** Type in a fake song title, pick an artist, click "Generate". Most language models are imprecise and Rockbot is no exception. You may see NSFW lyrics unexpectedly. I have made no attempts to censor. Generated lyrics may be repetitive and/or incoherent at times, but hopefully you'll encounter something interesting or memorable. Oh, and generation is resource intense and can be slow. I set governors on song length to keep generation time somewhat reasonable. You may adjust song length and other parameters on the left or check out [Github](https://github.com/bigjoedata/rockbot) to spin up your own Rockbot. Just have fun. [Demo](https://share.streamlit.io/bigjoedata/rockbot/main/src/main.py) Adjust settings to increase speed [Github](https://github.com/bigjoedata/rockbot) [GPT-2 124M version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot) [DistilGPT2 version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot-distilgpt2/) This is leaner with the tradeoff being that the lyrics are more simplistic. 🎹 🪘 🎷 🎺 🪗 🪕 🎻 ## Background With the shutdown of [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) I used Google's takeout function to gather the metadata from artists I've listened to over the past several years. I wanted to take advantage of this bounty to build something fun. I scraped the top 50 lyrics for artists I'd listened to at least once from [Genius](https://genius.com/), then fine tuned [GPT-2's](https://openai.com/blog/better-language-models/) 124M token model using the [AITextGen](https://github.com/minimaxir/aitextgen) framework after considerable post-processing. For more on generation, see [here.](https://huggingface.co/blog/how-to-generate) ### Full Tech Stack [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) (R.I.P.). [Python](https://www.python.org/). [Streamlit](https://www.streamlit.io/). [GPT-2](https://openai.com/blog/better-language-models/). [AITextGen](https://github.com/minimaxir/aitextgen). [Pandas](https://pandas.pydata.org/). [LyricsGenius](https://lyricsgenius.readthedocs.io/en/master/). [Google Colab](https://colab.research.google.com/) (GPU based Training). [Knime](https://www.knime.com/) (data cleaning). ## How to Use The Model Please refer to [AITextGen](https://github.com/minimaxir/aitextgen) for much better documentation. ### Training Parameters Used ai.train("lyrics.txt", line_by_line=False, from_cache=False, num_steps=10000, generate_every=2000, save_every=2000, save_gdrive=False, learning_rate=1e-3, batch_size=3, eos_token="<|endoftext|>", #fp16=True ) ### To Use Generate With Prompt (Use Title Case): Song Name BY Artist Name
classla/bcms-bertic-generator
classla
2021-05-21T13:29:30Z
5
2
transformers
[ "transformers", "pytorch", "electra", "pretraining", "masked-lm", "hr", "bs", "sr", "cnr", "hbs", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - hr - bs - sr - cnr - hbs tags: - masked-lm widget: - text: "Zovem se Marko i radim u [MASK]." license: apache-2.0 --- # BERTić&ast; [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian &ast; The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well). This is the smaller generator of the main [discriminator model](https://huggingface.co/classla/bcms-bertic), useful if you want to continue pre-training the discriminator model. If you use the model, please cite the following paper: ``` @inproceedings{ljubesic-lauc-2021-bertic, title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian", author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor", booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing", month = apr, year = "2021", address = "Kiyv, Ukraine", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5", pages = "37--42", } ```
Dongjae/mrc2reader
Dongjae
2021-05-21T13:25:57Z
14
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
The Reader model is for Korean question answering. The backbone model is deepset/xlm-roberta-large-squad2, fine-tuned on the KorQuAD-v1 dataset. On the KorQuAD evaluation set, it scores approximately 87% EM and 92% F1. Thank you
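A minimal usage sketch with the standard question-answering pipeline (the Korean question/context pair is illustrative):

```python
from transformers import pipeline

# Illustrative Korean question/context pair.
qa = pipeline("question-answering", model="Dongjae/mrc2reader", tokenizer="Dongjae/mrc2reader")
result = qa(question="대한민국의 수도는 어디인가?", context="대한민국의 수도는 서울이다.")
print(result["answer"], result["score"])
```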
anonymous-german-nlp/german-gpt2
anonymous-german-nlp
2021-05-21T13:20:42Z
338
1
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "de", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: de widget: - text: "Heute ist sehr schönes Wetter in" license: mit --- # German GPT-2 model **Note**: This model was de-anonymized and now lives at: https://huggingface.co/dbmdz/german-gpt2 Please use the new model name instead!
aliosm/ComVE-gpt2
aliosm
2021-05-21T13:19:25Z
7
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "exbert", "commonsense", "semeval2020", "comve", "en", "dataset:ComVE", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: "en" tags: - exbert - commonsense - semeval2020 - comve license: "mit" datasets: - ComVE metrics: - bleu widget: - text: "Chicken can swim in water. <|continue|>" --- # ComVE-gpt2 ## Model description Finetuned model on Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080) using a causal language modeling (CLM) objective. The model is able to generate a reason why a given natural language statement is against commonsense. ## Intended uses & limitations You can use the raw model for text generation to generate reasons why natural language statements are against commonsense. #### How to use You can use this model directly to generate reasons why the given statement is against commonsense using [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script. *Note:* make sure that you are using version `2.4.1` of `transformers` package. Newer versions has some issue in text generation and the model repeats the last token generated again and again. #### Limitations and bias The model biased to negate the entered sentence usually instead of producing a factual reason. ## Training data The model is initialized from the [gpt2](https://github.com/huggingface/transformers/blob/master/model_cards/gpt2-README.md) model and finetuned using [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset which contains 10K against commonsense sentences, each of them is paired with three reference reasons. ## Training procedure Each natural language statement that against commonsense is concatenated with its reference reason with `<|continue|>` as a separator, then the model finetuned using CLM objective. The model trained on Nvidia Tesla P100 GPU from Google Colab platform with 5e-5 learning rate, 5 epochs, 128 maximum sequence length and 64 batch size. <center> <img src="https://i.imgur.com/xKbrwBC.png"> </center> ## Eval results The model achieved 14.0547/13.6534 BLEU scores on SemEval2020 Task4: Commonsense Validation and Explanation development and testing dataset. ### BibTeX entry and citation info ```bibtex @article{fadel2020justers, title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation}, author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik}, year={2020} } ``` <a href="https://huggingface.co/exbert/?model=aliosm/ComVE-gpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
aliosm/ComVE-gpt2-large
aliosm
2021-05-21T13:12:02Z
13
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "exbert", "commonsense", "semeval2020", "comve", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: "en" tags: - gpt2 - exbert - commonsense - semeval2020 - comve license: "mit" datasets: - https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation metrics: - bleu widget: - text: "Chicken can swim in water. <|continue|>" --- # ComVE-gpt2-large ## Model description Finetuned model on Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080) using a causal language modeling (CLM) objective. The model is able to generate a reason why a given natural language statement is against commonsense. ## Intended uses & limitations You can use the raw model for text generation to generate reasons why natural language statements are against commonsense. #### How to use You can use this model directly to generate reasons why the given statement is against commonsense using [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script. *Note:* make sure that you are using version `2.4.1` of `transformers` package. Newer versions has some issue in text generation and the model repeats the last token generated again and again. #### Limitations and bias The model biased to negate the entered sentence usually instead of producing a factual reason. ## Training data The model is initialized from the [gpt2-large](https://github.com/huggingface/transformers/blob/master/model_cards/gpt2-README.md) model and finetuned using [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset which contains 10K against commonsense sentences, each of them is paired with three reference reasons. ## Training procedure Each natural language statement that against commonsense is concatenated with its reference reason with `<|conteniue|>` as a separator, then the model finetuned using CLM objective. The model trained on Nvidia Tesla P100 GPU from Google Colab platform with 5e-5 learning rate, 5 epochs, 128 maximum sequence length and 64 batch size. <center> <img src="https://i.imgur.com/xKbrwBC.png"> </center> ## Eval results The model achieved 16.5110/15.9299 BLEU scores on SemEval2020 Task4: Commonsense Validation and Explanation development and testing dataset. ### BibTeX entry and citation info ```bibtex @article{fadel2020justers, title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation}, author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik}, year={2020} } ``` <a href="https://huggingface.co/exbert/?model=aliosm/ComVE-gpt2-large"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
kamivao/autonlp-cola_gram-208681
kamivao
2021-05-21T12:43:57Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:kamivao/autonlp-data-cola_gram", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - kamivao/autonlp-data-cola_gram --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 208681 ## Validation Metrics - Loss: 0.37569838762283325 - Accuracy: 0.8365019011406845 - Precision: 0.8398058252427184 - Recall: 0.9453551912568307 - AUC: 0.9048838797814208 - F1: 0.8894601542416453 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kamivao/autonlp-cola_gram-208681 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("kamivao/autonlp-cola_gram-208681", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("kamivao/autonlp-cola_gram-208681", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
ainize/gpt2-rnm-with-only-rick
ainize
2021-05-21T12:06:44Z
7
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
### Model information Fine-tuning data 1: https://www.kaggle.com/andradaolteanu/rickmorty-scripts Base model: e-tony/gpt2-rnm Epoch: 1 Train runtime: 3.4982 secs Loss: 3.0894 Training notebook: [Colab](https://colab.research.google.com/drive/1RawVxulLETFicWMY0YANUdP-H-e7Eeyc) ### ===Teachable NLP=== ### Training a GPT-2 model normally requires writing code and GPU resources, but Teachable NLP lets you fine-tune a model and get an API for it easily and for free. Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp) Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
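A minimal generation sketch for this fine-tuned checkpoint (the prompt is an illustrative script-style line; settings are illustrative):

```python
from transformers import pipeline

# Illustrative script-style prompt and sampling settings.
generator = pipeline("text-generation", model="ainize/gpt2-rnm-with-only-rick")
print(generator("Rick: Listen Morty,", max_length=60, do_sample=True)[0]["generated_text"])
```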
Ochiroo/tiny_mn_gpt
Ochiroo
2021-05-21T10:59:47Z
6
1
transformers
[ "transformers", "tf", "gpt2", "text-generation", "mn", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: mn --- # GPT2-Mongolia ## Model description GPT-2 is a transformers model pretrained on a very small corpus of Mongolian news data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. ## How to use ```python import tensorflow as tf from transformers import GPT2Config, TFGPT2LMHeadModel, GPT2Tokenizer from transformers import WEIGHTS_NAME, CONFIG_NAME tokenizer = GPT2Tokenizer.from_pretrained('Ochiroo/tiny_mn_gpt') model = TFGPT2LMHeadModel.from_pretrained('Ochiroo/tiny_mn_gpt') text = "Намайг Эрдэнэ-Очир гэдэг. Би" input_ids = tokenizer.encode(text, return_tensors='tf') beam_outputs = model.generate( input_ids, max_length = 25, num_beams = 5, temperature = 0.7, no_repeat_ngram_size=2, num_return_sequences=5 ) print(tokenizer.decode(beam_outputs[0])) ``` ## Training data and biases Trained on 500MB of Mongolian news dataset (IKON) on RTX 2060.
Meli/GPT2-Prompt
Meli
2021-05-21T10:55:36Z
311
11
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: - en tags: - gpt2 - text-generation pipeline_tag: text-generation widget: - text: "A person with a high school education gets sent back into the 1600s and tries to explain science and technology to the people. [endprompt]" - text: "A kid doodling in a math class accidentally creates the world's first functional magic circle in centuries. [endprompt]" --- # GPT-2 Story Generator ## Model description Generate a short story from an input prompt. Append the token ` [endprompt]` to the end of your input. Example of an input: ``` A person with a high school education gets sent back into the 1600s and tries to explain science and technology to the people. [endprompt] ``` #### Limitations and bias The training data was collected from Reddit, so it could be very biased towards a young, white, male demographic. ## Training data The data was collected by scraping Reddit.
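A minimal sketch of the `[endprompt]` input format described above (the prompt is taken from the card's widget; decoding settings are illustrative):

```python
from transformers import pipeline

# Prompt taken from the card's widget; decoding settings are illustrative.
generator = pipeline("text-generation", model="Meli/GPT2-Prompt")
prompt = ("A kid doodling in a math class accidentally creates the world's first "
          "functional magic circle in centuries. [endprompt]")
print(generator(prompt, max_length=150, do_sample=True, top_p=0.9)[0]["generated_text"])
```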
HooshvareLab/gpt2-fa
HooshvareLab
2021-05-21T10:51:23Z
6,032
15
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "fa", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: fa license: apache-2.0 widget: - text: "در یک اتفاق شگفت انگیز، پژوهشگران" - text: "گرفتگی بینی در کودکان و به‌خصوص نوزادان باعث می‌شود" - text: "امیدواریم نوروز امسال سالی" --- # ParsGPT2 ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ParsGPT2, author = {Hooshvare Team}, title = {ParsGPT2 the Persian version of GPT2}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/hooshvare/parsgpt}}, } ``` ## Questions? Post a Github issue on the [ParsGPT2 Issues](https://github.com/hooshvare/parsgpt/issues) repo.
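The card lists no usage snippet; a minimal generation sketch with one of the widget prompts (decoding settings are illustrative):

```python
from transformers import pipeline

# Prompt taken from the card's widget; settings are illustrative.
generator = pipeline("text-generation", model="HooshvareLab/gpt2-fa")
prompt = "در یک اتفاق شگفت انگیز، پژوهشگران"
print(generator(prompt, max_length=64, do_sample=True)[0]["generated_text"])
```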
HooshvareLab/gpt2-fa-poetry
HooshvareLab
2021-05-21T10:50:14Z
65
0
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "fa", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: fa license: apache-2.0 widget: - text: "<s>رودکی<|startoftext|>" - text: "<s>فردوسی<|startoftext|>" - text: "<s>خیام<|startoftext|>" - text: "<s>عطار<|startoftext|>" - text: "<s>نظامی<|startoftext|>" --- # Persian Poet GPT2 ## Poets The model can generate poetry based on your favorite poet, and you need to add one of the following lines as the input the box on the right side or follow the [fine-tuning notebook](https://colab.research.google.com/github/hooshvare/parsgpt/blob/master/notebooks/Persian_Poetry_FineTuning.ipynb). ```text <s>رودکی<|startoftext|> <s>فردوسی<|startoftext|> <s>کسایی<|startoftext|> <s>ناصرخسرو<|startoftext|> <s>منوچهری<|startoftext|> <s>فرخی سیستانی<|startoftext|> <s>مسعود سعد سلمان<|startoftext|> <s>ابوسعید ابوالخیر<|startoftext|> <s>باباطاهر<|startoftext|> <s>فخرالدین اسعد گرگانی<|startoftext|> <s>اسدی توسی<|startoftext|> <s>هجویری<|startoftext|> <s>خیام<|startoftext|> <s>نظامی<|startoftext|> <s>عطار<|startoftext|> <s>سنایی<|startoftext|> <s>خاقانی<|startoftext|> <s>انوری<|startoftext|> <s>عبدالواسع جبلی<|startoftext|> <s>نصرالله منشی<|startoftext|> <s>مهستی گنجوی<|startoftext|> <s>باباافضل کاشانی<|startoftext|> <s>مولوی<|startoftext|> <s>سعدی<|startoftext|> <s>خواجوی کرمانی<|startoftext|> <s>عراقی<|startoftext|> <s>سیف فرغانی<|startoftext|> <s>حافظ<|startoftext|> <s>اوحدی<|startoftext|> <s>شیخ محمود شبستری<|startoftext|> <s>عبید زاکانی<|startoftext|> <s>امیرخسرو دهلوی<|startoftext|> <s>سلمان ساوجی<|startoftext|> <s>شاه نعمت‌الله ولی<|startoftext|> <s>جامی<|startoftext|> <s>هلالی جغتایی<|startoftext|> <s>وحشی<|startoftext|> <s>محتشم کاشانی<|startoftext|> <s>شیخ بهایی<|startoftext|> <s>عرفی<|startoftext|> <s>رضی‌الدین آرتیمانی<|startoftext|> <s>صائب تبریزی<|startoftext|> <s>فیض کاشانی<|startoftext|> <s>بیدل دهلوی<|startoftext|> <s>هاتف اصفهانی<|startoftext|> <s>فروغی بسطامی<|startoftext|> <s>قاآنی<|startoftext|> <s>ملا هادی سبزواری<|startoftext|> <s>پروین اعتصامی<|startoftext|> <s>ملک‌الشعرای بهار<|startoftext|> <s>شهریار<|startoftext|> <s>رهی معیری<|startoftext|> <s>اقبال لاهوری<|startoftext|> <s>خلیل‌الله خلیلی<|startoftext|> <s>شاطرعباس صبوحی<|startoftext|> <s>نیما یوشیج ( آوای آزاد )<|startoftext|> <s>احمد شاملو<|startoftext|> <s>سهراب سپهری<|startoftext|> <s>فروغ فرخزاد<|startoftext|> <s>سیمین بهبهانی<|startoftext|> <s>مهدی اخوان ثالث<|startoftext|> <s>محمدحسن بارق شفیعی<|startoftext|> <s>شیون فومنی<|startoftext|> <s>کامبیز صدیقی کسمایی<|startoftext|> <s>بهرام سالکی<|startoftext|> <s>عبدالقهّار عاصی<|startoftext|> <s>اِ لیـــار (جبار محمدی )<|startoftext|> ``` ## Questions? Post a Github issue on the [ParsGPT2 Issues](https://github.com/hooshvare/parsgpt/issues) repo.
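A minimal sketch of the poet-conditioned prompt format listed above (the poet choice and decoding settings are illustrative):

```python
from transformers import pipeline

# Poet-conditioned prompt taken from the card's list; settings are illustrative.
generator = pipeline("text-generation", model="HooshvareLab/gpt2-fa-poetry")
prompt = "<s>حافظ<|startoftext|>"
print(generator(prompt, max_length=96, do_sample=True)[0]["generated_text"])
```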
HScomcom/gpt2-lovecraft
HScomcom
2021-05-21T10:38:11Z
9
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
### Model information Fine-tuning data: https://www.kaggle.com/bennijesus/lovecraft-fiction License: CC0: Public Domain Base model: GPT-2 large Epoch: 30 Train runtime: 10307.3488 secs Loss: 0.0292 API page: [Ainize](https://ainize.ai/fpem123/GPT2-LoveCraft?branch=master) Demo page: [End-point](https://master-gpt2-love-craft-fpem123.endpoint.ainize.ai/) ### ===Teachable NLP=== Training a GPT-2 model normally requires writing code and GPU resources, but Teachable NLP lets you fine-tune a model and get an API for it easily and for free. Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp) Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp) And my other Lovecraft model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-lovecraft/71)
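A minimal generation sketch for this fine-tuned checkpoint (prompt and sampling settings are illustrative):

```python
from transformers import pipeline

# Illustrative prompt and sampling settings.
generator = pipeline("text-generation", model="HScomcom/gpt2-lovecraft")
prompt = "That night, beneath the ruined lighthouse,"
print(generator(prompt, max_length=100, do_sample=True, top_p=0.95)[0]["generated_text"])
```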
Davlan/mt5_base_eng_yor_mt
Davlan
2021-05-21T10:14:10Z
54
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "arxiv:2103.08647", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
Hugging Face's logo --- language: - yo - en datasets: - JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) --- # mT5_base_eng_yor_mt ## Model description **mT5_base_yor_eng_mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned mT5-base model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá. Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for MT. ```python from transformers import MT5ForConditionalGeneration, T5Tokenizer model = MT5ForConditionalGeneration.from_pretrained("Davlan/mt5_base_eng_yor_mt") tokenizer = T5Tokenizer.from_pretrained("google/mt5-base") input_string = "Where are you?" inputs = tokenizer.encode(input_string, return_tensors="pt") generated_tokens = model.generate(inputs) results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(results) ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on on JW300 corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (BLEU score) 9.82 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) ### BibTeX entry and citation info By David Adelani ``` ```
HJK/PickupLineGenerator
HJK
2021-05-21T10:05:21Z
12
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
Basically, it makes pickup lines. Based on GPT-2: https://huggingface.co/gpt2
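A minimal usage sketch (the seed text and sampling settings are illustrative):

```python
from transformers import pipeline

# Illustrative seed text; the model continues it into a pickup line.
generator = pipeline("text-generation", model="HJK/PickupLineGenerator")
for line in generator("Are you", max_length=30, do_sample=True, num_return_sequences=3):
    print(line["generated_text"])
```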
Ferch423/gpt2-small-portuguese-wikipediabio
Ferch423
2021-05-21T09:42:53Z
20
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "pt", "wikipedia", "finetuning", "dataset:wikipedia", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: "pt" tags: - pt - wikipedia - gpt2 - finetuning datasets: - wikipedia widget: - "André Um" - "Maria do Santos" - "Roberto Carlos" license: "mit" --- # GPT2-SMALL-PORTUGUESE-WIKIPEDIABIO This is a fine-tuned version of gpt2-small-portuguese (https://huggingface.co/pierreguillou/gpt2-small-portuguese) by pierreguillou. It was trained on a dataset of person abstracts extracted from DBpedia (over 100,000 people's abstracts). The model is intended as a simple and fun experiment for generating text abstracts based on ordinary people's names.
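A minimal sketch using one of the widget names above (decoding settings are illustrative):

```python
from transformers import pipeline

# Name prompt taken from the card's widget; settings are illustrative.
generator = pipeline("text-generation", model="Ferch423/gpt2-small-portuguese-wikipediabio")
print(generator("Maria do Santos", max_length=80, do_sample=True)[0]["generated_text"])
```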
CallumRai/HansardGPT2
CallumRai
2021-05-21T09:33:25Z
15
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
A PyTorch GPT-2 model trained on Hansard (the official report of UK parliamentary debates) from 2019-01-01 to 2020-06-01. For more information, see: https://github.com/CallumRai/Hansard/
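A minimal generation sketch (the prompt is an illustrative debate-style opening; settings are illustrative):

```python
from transformers import pipeline

# Illustrative debate-style opening line.
generator = pipeline("text-generation", model="CallumRai/HansardGPT2")
prompt = "The Secretary of State will be aware that"
print(generator(prompt, max_length=60, do_sample=True)[0]["generated_text"])
```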
lg/ghpy_40k
lg
2021-05-20T23:37:47Z
3
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# This model is probably not what you're looking for.
lg/openinstruct_1k1
lg
2021-05-20T23:37:33Z
6
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# This model is probably not what you're looking for.
lg/fexp_1
lg
2021-05-20T23:37:11Z
5
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# This model is probably not what you're looking for.
ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli
ynie
2021-05-20T23:17:23Z
17,917
18
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "dataset:snli", "dataset:anli", "dataset:multi_nli", "dataset:multi_nli_mismatch", "dataset:fever", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- datasets: - snli - anli - multi_nli - multi_nli_mismatch - fever license: mit --- This is a strong pre-trained RoBERTa-Large NLI model. The training data is a combination of well-known NLI datasets: [`SNLI`](https://nlp.stanford.edu/projects/snli/), [`MNLI`](https://cims.nyu.edu/~sbowman/multinli/), [`FEVER-NLI`](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [`ANLI (R1, R2, R3)`](https://github.com/facebookresearch/anli). Other pre-trained NLI models including `RoBERTa`, `ALBert`, `BART`, `ELECTRA`, `XLNet` are also available. Trained by [Yixin Nie](https://easonnie.github.io), [original source](https://github.com/facebookresearch/anli). Try the code snippet below. ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch if __name__ == '__main__': max_length = 256 premise = "Two women are embracing while holding to go packages." hypothesis = "The men are fighting outside a deli." hg_model_hub_name = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli" # hg_model_hub_name = "ynie/albert-xxlarge-v2-snli_mnli_fever_anli_R1_R2_R3-nli" # hg_model_hub_name = "ynie/bart-large-snli_mnli_fever_anli_R1_R2_R3-nli" # hg_model_hub_name = "ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli" # hg_model_hub_name = "ynie/xlnet-large-cased-snli_mnli_fever_anli_R1_R2_R3-nli" tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name) model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name) tokenized_input_seq_pair = tokenizer.encode_plus(premise, hypothesis, max_length=max_length, return_token_type_ids=True, truncation=True) input_ids = torch.Tensor(tokenized_input_seq_pair['input_ids']).long().unsqueeze(0) # remember bart doesn't have 'token_type_ids', remove the line below if you are using bart. token_type_ids = torch.Tensor(tokenized_input_seq_pair['token_type_ids']).long().unsqueeze(0) attention_mask = torch.Tensor(tokenized_input_seq_pair['attention_mask']).long().unsqueeze(0) outputs = model(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=None) # Note: # "id2label": { # "0": "entailment", # "1": "neutral", # "2": "contradiction" # }, predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() # batch_size only one print("Premise:", premise) print("Hypothesis:", hypothesis) print("Entailment:", predicted_probability[0]) print("Neutral:", predicted_probability[1]) print("Contradiction:", predicted_probability[2]) ``` More in [here](https://github.com/facebookresearch/anli/blob/master/src/hg_api/interactive_eval.py). Citation: ``` @inproceedings{nie-etal-2020-adversarial, title = "Adversarial {NLI}: A New Benchmark for Natural Language Understanding", author = "Nie, Yixin and Williams, Adina and Dinan, Emily and Bansal, Mohit and Weston, Jason and Kiela, Douwe", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", year = "2020", publisher = "Association for Computational Linguistics", } ```
urduhack/roberta-urdu-small
urduhack
2021-05-20T22:52:23Z
884
8
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "roberta-urdu-small", "urdu", "ur", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ur thumbnail: https://raw.githubusercontent.com/urduhack/urduhack/master/docs/_static/urduhack.png tags: - roberta-urdu-small - urdu - transformers license: mit --- ## roberta-urdu-small [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/urduhack/urduhack/blob/master/LICENSE) ### Overview **Language model:** roberta-urdu-small **Model size:** 125M **Language:** Urdu **Training data:** News data from Urdu news resources in Pakistan ### About roberta-urdu-small roberta-urdu-small is a language model for the Urdu language. ```python from transformers import pipeline fill_mask = pipeline("fill-mask", model="urduhack/roberta-urdu-small", tokenizer="urduhack/roberta-urdu-small") ``` ## Training procedure roberta-urdu-small was trained on an Urdu news corpus. The training data was normalized using the normalization module from urduhack to eliminate characters from other languages such as Arabic. ### About Urduhack Urduhack is a Natural Language Processing (NLP) library for the Urdu language. Github: https://github.com/urduhack/urduhack
twmkn9/distilroberta-base-squad2
twmkn9
2021-05-20T22:45:57Z
39
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
This model is [Distilroberta base](https://huggingface.co/distilroberta-base) fine-tuned on SQuAD v2 as follows: ``` export SQUAD_DIR=../../squad2 python3 run_squad.py --model_type roberta --model_name_or_path distilroberta-base --do_train --do_eval --overwrite_cache --do_lower_case --version_2_with_negative --save_steps 100000 --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --per_gpu_train_batch_size 8 --num_train_epochs 3 --learning_rate 3e-5 --max_seq_length 384 --doc_stride 128 --output_dir ./tmp/distilroberta_fine_tuned/ ``` Performance on a dev subset is close to the original paper: ``` Results: { 'exact': 70.9279368213228, 'f1': 74.60439802429168, 'total': 6078, 'HasAns_exact': 67.62886597938144, 'HasAns_f1': 75.30774267754136, 'HasAns_total': 2910, 'NoAns_exact': 73.95833333333333, 'NoAns_f1': 73.95833333333333, 'NoAns_total': 3168, 'best_exact': 70.94438960184272, 'best_exact_thresh': 0.0, 'best_f1': 74.62085080481161, 'best_f1_thresh': 0.0 } ``` We are hopeful this might save you time, energy, and compute. Cheers!
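A minimal inference sketch with the standard question-answering pipeline (the question/context pair is illustrative):

```python
from transformers import pipeline

# Illustrative question/context pair.
qa = pipeline("question-answering", model="twmkn9/distilroberta-base-squad2",
              tokenizer="twmkn9/distilroberta-base-squad2")
result = qa(question="Which dataset was the model fine-tuned on?",
            context="This model is DistilRoBERTa base fine-tuned on SQuAD v2.")
print(result["answer"], result["score"])
```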
tlemberger/sd-ner
tlemberger
2021-05-20T22:31:05Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "token-classification", "token classification", "dataset:EMBO/sd-panels", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - english thumbnail: tags: - token classification license: datasets: - EMBO/sd-panels metrics: - --- # sd-ner ## Model description This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang) and fine-tuned for token classification on the SourceData [sd-panels](https://huggingface.co/datasets/EMBO/sd-panels) dataset to perform Named Entity Recognition of bioentities. ## Intended uses & limitations #### How to use The intended use of this model is for Named Entity Recognition of biological entitie used in SourceData annotations (https://sourcedata.embo.org), including small molecules, gene products (genes and proteins), subcellular components, cell line and cell types, organ and tissues, species as well as experimental methods. To have a quick check of the model: ```python from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification example = """<s> F. Western blot of input and eluates of Upf1 domains purification in a Nmd4-HA strain. The band with the # might corresponds to a dimer of Upf1-CH, bands marked with a star correspond to residual signal with the anti-HA antibodies (Nmd4). Fragments in the eluate have a smaller size because the protein A part of the tag was removed by digestion with the TEV protease. G6PDH served as a loading control in the input samples </s>""" tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512) model = RobertaForTokenClassification.from_pretrained('EMBO/sd-ner') ner = pipeline('ner', model, tokenizer=tokenizer) res = ner(example) for r in res: print(r['word'], r['entity']) ``` #### Limitations and bias The model must be used with the `roberta-base` tokenizer. ## Training data The model was trained for token classification using the [EMBO/sd-panels dataset](https://huggingface.co/datasets/EMBO/biolang) wich includes manually annotated examples. ## Training procedure The training was run on a NVIDIA DGX Station with 4XTesla V100 GPUs. Training code is available at https://github.com/source-data/soda-roberta - Command: `python -m tokcl.train /data/json/sd_panels NER --num_train_epochs=3.5` - Tokenizer vocab size: 50265 - Training data: EMBO/biolang MLM - Training with 31410 examples. - Evaluating on 8861 examples. - Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY - Epochs: 3.5 - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `learning_rate`: 0.0001 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 ## Eval results On test set with `sklearn.metrics`: ``` precision recall f1-score support CELL 0.77 0.81 0.79 3477 EXP_ASSAY 0.71 0.70 0.71 7049 GENEPROD 0.86 0.90 0.88 16140 ORGANISM 0.80 0.82 0.81 2759 SMALL_MOLECULE 0.78 0.82 0.80 4446 SUBCELLULAR 0.71 0.75 0.73 2125 TISSUE 0.70 0.75 0.73 1971 micro avg 0.79 0.82 0.81 37967 macro avg 0.76 0.79 0.78 37967 weighted avg 0.79 0.82 0.81 37967 ```
thatdramebaazguy/roberta-base-wikimovies
thatdramebaazguy
2021-05-20T22:29:54Z
4
2
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "roberta-base", "masked-language-modeling", "dataset:wikimovies", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
datasets:
- wikimovies
language:
- English
thumbnail:
tags:
- roberta
- roberta-base
- masked-language-modeling
license: cc-by-4.0
---

# roberta-base for MLM

```
from transformers import pipeline

model_name = "thatdramebaazguy/roberta-base-wikimovies"
pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="fill-mask")
```

## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Fill-Mask
**Training data:** wikimovies
**Eval data:** wikimovies
**Infrastructure:** 2x Tesla V100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/shell_scripts/train_movie_roberta.sh)

## Hyperparameters
```
num_examples = 4346
batch_size = 16
n_epochs = 3
base_LM_model = "roberta-base"
learning_rate = 5e-05
max_query_length=64
Gradient Accumulation steps = 1
Total optimization steps = 816
evaluation_strategy=IntervalStrategy.NO
prediction_loss_only=False
per_device_train_batch_size=8
per_device_eval_batch_size=8
adam_beta1=0.9
adam_beta2=0.999
adam_epsilon=1e-08
max_grad_norm=1.0
lr_scheduler_type=SchedulerType.LINEAR
warmup_ratio=0.0
seed=42
eval_steps=500
metric_for_best_model=None
greater_is_better=None
label_smoothing_factor=0.0
```

## Performance
perplexity = 4.3808

Some of my work:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)

---
textattack/roberta-base-rotten_tomatoes
textattack
2021-05-20T22:18:23Z
8
1
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
## roberta-base fine-tuned with TextAttack on the rotten_tomatoes dataset

This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned for 10 epochs with a batch size of 128, a learning rate of 5e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9033771106941839, as measured by the eval set accuracy, found after 9 epochs. For more information, check out [TextAttack on GitHub](https://github.com/QData/TextAttack).
textattack/roberta-base-rotten-tomatoes
textattack
2021-05-20T22:17:29Z
34
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card

This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned for 10 epochs with a batch size of 64, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9033771106941839, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on GitHub](https://github.com/QData/TextAttack).
textattack/roberta-base-ag-news
textattack
2021-05-20T22:15:20Z
487
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card

This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 5e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9469736842105263, as measured by the eval set accuracy, found after 4 epochs. For more information, check out [TextAttack on GitHub](https://github.com/QData/TextAttack).
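Since the card itself ships no usage snippet, here is a minimal sketch of how a fine-tuned sequence-classification checkpoint like this one is typically loaded with the `transformers` pipeline API. The example headline is invented, and the label-name mapping in the comment is an assumption based on the usual ag_news convention rather than something stated on this card.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Load the fine-tuned checkpoint and its tokenizer from the Hub.
model_name = "textattack/roberta-base-ag-news"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Wrap them in a text-classification pipeline and classify a sample headline.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Stocks rallied on Friday after the quarterly earnings report beat expectations."))
# Output looks like [{'label': ..., 'score': ...}]. If the checkpoint only exposes
# generic LABEL_i names, map i using the ag_news convention
# (0: World, 1: Sports, 2: Business, 3: Sci/Tech) and verify against config.id2label.
```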
textattack/roberta-base-RTE
textattack
2021-05-20T22:10:37Z
122
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.7942238267148014, as measured by the eval set accuracy, found after 3 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
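Because RTE is a sentence-pair task, inference passes the premise and hypothesis together. The sketch below is a hedged illustration: the example sentences are invented, and the entailment/not-entailment label order follows the usual GLUE convention, which this card does not state, so check `config.id2label` before relying on it.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/roberta-base-RTE"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A new flu strain was identified by scientists in China."
hypothesis = "Scientists discovered a new strain of flu."

# Encode the pair the same way GLUE RTE examples are encoded (sentence1, sentence2).
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# GLUE RTE has two classes; index 0 is usually "entailment" and 1 "not_entailment",
# but the checkpoint's own id2label mapping is the source of truth.
print(pred, model.config.id2label.get(pred, pred))
```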
textattack/roberta-base-MRPC
textattack
2021-05-20T22:07:47Z
206
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 256. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9117647058823529, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
simonlevine/clinical-longformer
simonlevine
2021-05-20T21:25:09Z
19
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
- You'll need to instantiate a special RoBERTa class. Though technically a "Longformer", the elongated RoBERTa model will still need to be pulled in as such. - To do so, use the following classes: ```python class RobertaLongSelfAttention(LongformerSelfAttention): def forward( self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, output_attentions=False, ): return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions) class RobertaLongForMaskedLM(RobertaForMaskedLM): def __init__(self, config): super().__init__(config) for i, layer in enumerate(self.roberta.encoder.layer): # replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention` layer.attention.self = RobertaLongSelfAttention(config, layer_id=i) ``` - Then, pull the model as ```RobertaLongForMaskedLM.from_pretrained('simonlevine/bioclinical-roberta-long')``` - Now, it can be used as usual. Note you may get untrained weights warnings. - Note that you can replace ```RobertaForMaskedLM``` with a different task-specific RoBERTa from Huggingface, such as RobertaForSequenceClassification.
seyonec/ChemBERTa-zinc-base-v1
seyonec
2021-05-20T20:55:33Z
96,218
46
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "chemistry", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
tags:
- chemistry
---

# ChemBERTa: Training a BERT-like transformer model for masked language modelling of chemical SMILES strings.

Deep learning for chemistry and materials science remains a novel field with a lot of potential. However, the transfer-learning methods that have become popular in areas such as NLP and computer vision have not yet been developed as effectively for computational chemistry and machine learning. Using HuggingFace's suite of models and the ByteLevel tokenizer, we are able to train on a large corpus of 100k SMILES strings from a commonly known benchmark dataset, ZINC.

Training RoBERTa over 5 epochs, the model achieves a decent loss of 0.398, and the loss would likely continue to decline if the model were trained for more epochs. The model can predict tokens within a SMILES sequence/molecule, allowing variants of a molecule within discoverable chemical space to be predicted.

By applying the representations of functional groups and atoms learned by the model, we can try to tackle problems of toxicity, solubility, drug-likeness, and synthesis accessibility on smaller datasets, using the learned representations as features for graph convolution and attention models on the graph structure of molecules, as well as fine-tuning of BERT. Finally, we propose the use of attention visualization as a helpful tool for chemistry practitioners and students to quickly identify important substructures in various chemical properties. Additionally, previous research has shown attention visualization to be highly valuable for chemical reaction classification.

Open-sourcing large-scale transformer models such as RoBERTa with HuggingFace may allow for the acceleration of these individual research directions.

A link to a repository which includes the training, uploading and evaluation notebook (with sample predictions on compounds such as Remdesivir) can be found [here](https://github.com/seyonechithrananda/bert-loves-chemistry). All of the notebooks can be copied into a new Colab runtime for easy execution.

Thanks for checking this out!
- Seyone
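As a concrete illustration of the masked-token prediction described above, the following minimal sketch queries the model for plausible completions of a SMILES string. The particular SMILES fragment and mask position are arbitrary examples chosen for illustration, not taken from the ZINC training data.

```python
from transformers import pipeline

# ChemBERTa is a RoBERTa-style model, so the mask token is "<mask>".
fill_mask = pipeline(
    "fill-mask",
    model="seyonec/ChemBERTa-zinc-base-v1",
    tokenizer="seyonec/ChemBERTa-zinc-base-v1",
)

# Ask the model to fill in one masked token of a benzene-like SMILES string.
smiles_with_mask = "C1=CC=CC<mask>C1"
for prediction in fill_mask(smiles_with_mask):
    print(prediction["token_str"], round(prediction["score"], 4))
```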
pulp/CHILDES-ParentBERTo
pulp
2021-05-20T19:46:06Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
A language model trained on a fill-mask task with all of the North American parents' data in CHILDES. The parents' data can be found here: https://github.com/xiaomeng-ma/CHILDES
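The card gives no usage snippet, so here is a minimal, assumption-labelled sketch of querying the model with the standard fill-mask pipeline; the child-directed example sentence is invented for illustration, and the `<mask>` token follows the RoBERTa convention this model uses.

```python
from transformers import pipeline

# Load the CHILDES parent-speech language model for masked-token prediction.
fill_mask = pipeline("fill-mask", model="pulp/CHILDES-ParentBERTo")

# A toy child-directed utterance with one masked word.
for prediction in fill_mask("Do you want some more <mask>?"):
    print(prediction["token_str"], round(prediction["score"], 4))
```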
prajjwal1/roberta-base-mnli
prajjwal1
2021-05-20T19:31:02Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Roberta-base trained on MNLI. | Task | Accuracy | |---------|----------| | MNLI | 86.32 | | MNLI-mm | 86.43 | You can also check out: - `prajjwal1/roberta-base-mnli` - `prajjwal1/roberta-large-mnli` - `prajjwal1/albert-base-v2-mnli` - `prajjwal1/albert-base-v1-mnli` - `prajjwal1/albert-large-v2-mnli` [@prajjwal_1](https://twitter.com/prajjwal_1)
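For readers who want to run the classifier on their own premise-hypothesis pairs, the sketch below shows one way to score an example. The sentences are illustrative, and the MNLI label order (entailment/neutral/contradiction) is not stated on this card, so the `id2label` lookup in the code should be treated as the source of truth.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "prajjwal1/roberta-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# MNLI inputs are encoded as a (premise, hypothesis) pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# Print each class with its probability, using whatever label names the config carries.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label.get(idx, idx), round(p, 4))
```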
pradhyra/AWSBlogBert
pradhyra
2021-05-20T19:30:09Z
9
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
This model is pre-trained on blog articles from AWS Blogs.

## Pre-training corpora
The input text contains around 3000 blog articles from the [AWS Blogs website](https://aws.amazon.com/blogs/) covering technical subject matter, including AWS products, tools and tutorials.

## Pre-training details
I picked a RoBERTa architecture for masked language modeling (6-layer, 768-hidden, 12-heads, 82M parameters) and its corresponding ByteLevelBPE tokenization strategy. I then followed HuggingFace's Transformers [blog post](https://huggingface.co/blog/how-to-train) to train the model.

I chose the following training set-up: 28k training steps with batches of 64 sequences of length 512 and an initial learning rate of 5e-5. The model achieved a training loss of 3.6 on the MLM task over 10 epochs.
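To make the pre-training description concrete, here is a short, hedged example of masked-token prediction with this checkpoint; the AWS-flavoured prompt is an invented illustration, and the `<mask>` token is assumed from the RoBERTa/ByteLevelBPE setup described above.

```python
from transformers import pipeline

# Masked language modeling with the AWS-blog pre-trained checkpoint.
fill_mask = pipeline("fill-mask", model="pradhyra/AWSBlogBert")

for prediction in fill_mask("You can store your objects in an Amazon <mask> bucket."):
    print(prediction["token_str"], round(prediction["score"], 4))
```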
nyu-mll/roberta-med-small-1M-3
nyu-mll
2021-05-20T19:09:09Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
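These checkpoints are plain masked language models, so a quick sanity check looks like the minimal sketch below; the prompt is an arbitrary example, and the same call works for any of the sibling checkpoints linked in the table above by swapping in its model id.

```python
from transformers import pipeline

# Any of the nyu-mll checkpoints from the table can be dropped in here.
fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-med-small-1M-3")

for prediction in fill_mask("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```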
nyu-mll/roberta-med-small-1M-2
nyu-mll
2021-05-20T19:07:56Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
nyu-mll/roberta-med-small-1M-1
nyu-mll
2021-05-20T19:06:25Z
8
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
nyu-mll/roberta-base-1B-3
nyu-mll
2021-05-20T19:05:43Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
nyu-mll/roberta-base-1B-2
nyu-mll
2021-05-20T19:04:39Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
nyu-mll/roberta-base-10M-3
nyu-mll
2021-05-20T19:00:36Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
nyu-mll/roberta-base-10M-2
nyu-mll
2021-05-20T18:58:09Z
7
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
nyu-mll/roberta-base-100M-3
nyu-mll
2021-05-20T18:56:02Z
15
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
mudes/en-large
mudes
2021-05-20T18:36:06Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "token-classification", "mudes", "en", "arxiv:2102.09665", "arxiv:2104.04630", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: en tags: - mudes license: apache-2.0 --- # MUDES - {Mu}ltilingual {De}tection of Offensive {S}pans We provide state-of-the-art models to detect toxic spans in social media texts. We introduce our framework in [this paper](https://arxiv.org/abs/2102.09665). We have evaluated our models on Toxic Spans task at SemEval 2021 (Task 5). Our participation in the task is detailed in [this paper](https://arxiv.org/abs/2104.04630). ## Usage You can use this model when you have [MUDES](https://github.com/TharinduDR/MUDES) installed: ```bash pip install mudes ``` Then you can use the model like this: ```python from mudes.app.mudes_app import MUDESApp app = MUDESApp("en-large", use_cuda=False) print(app.predict_toxic_spans("You motherfucking cunt", spans=True)) ``` ## System Demonstration An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out in [here](http://rgcl.wlv.ac.uk/mudes/). ## Citing & Authors If you find this model helpful, feel free to cite our publications ```bibtex @inproceedings{ranasinghemudes, title={{MUDES: Multilingual Detection of Offensive Spans}}, author={Tharindu Ranasinghe and Marcos Zampieri}, booktitle={Proceedings of NAACL}, year={2021} } ``` ```bibtex @inproceedings{ranasinghe2021semeval, title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}}, author = {Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex}, booktitle={Proceedings of SemEval}, year={2021} } ```
mrm8488/roberta-large-finetuned-wsc
mrm8488
2021-05-20T18:30:59Z
8
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1905.06290", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa (large) fine-tuned on Winograd Schema Challenge (WSC) data Step from its original [repo](https://github.com/pytorch/fairseq/blob/master/examples/roberta/wsc/README.md) The following instructions can be used to finetune RoBERTa on the WSC training data provided by [SuperGLUE](https://super.gluebenchmark.com/). Note that there is high variance in the results. For our GLUE/SuperGLUE submission we swept over the learning rate (1e-5, 2e-5, 3e-5), batch size (16, 32, 64) and total number of updates (500, 1000, 2000, 3000), as well as the random seed. Out of ~100 runs we chose the best 7 models and ensembled them. **Approach:** The instructions below use a slightly different loss function than what's described in the original RoBERTa arXiv paper. In particular, [Kocijan et al. (2019)](https://arxiv.org/abs/1905.06290) introduce a margin ranking loss between `(query, candidate)` pairs with tunable hyperparameters alpha and beta. This is supported in our code as well with the `--wsc-alpha` and `--wsc-beta` arguments. However, we achieved slightly better (and more robust) results on the development set by instead using a single cross entropy loss term over the log-probabilities for the query and all mined candidates. **The candidates are mined using spaCy from each input sentence in isolation, so the approach remains strictly pointwise.** This reduces the number of hyperparameters and our best model achieved 92.3% development set accuracy, compared to ~90% accuracy for the margin loss. Later versions of the RoBERTa arXiv paper will describe this updated formulation. ### 1) Download the WSC data from the SuperGLUE website: ```bash wget https://dl.fbaipublicfiles.com/glue/superglue/data/v2/WSC.zip unzip WSC.zip # we also need to copy the RoBERTa dictionary into the same directory wget -O WSC/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt ``` ### 2) Finetune over the provided training data: ```bash TOTAL_NUM_UPDATES=2000 # Total number of training steps. WARMUP_UPDATES=250 # Linearly increase LR over this many steps. LR=2e-05 # Peak LR for polynomial LR scheduler. MAX_SENTENCES=16 # Batch size per GPU. SEED=1 # Random seed. ROBERTA_PATH=/path/to/roberta/model.pt # we use the --user-dir option to load the task and criterion # from the examples/roberta/wsc directory: FAIRSEQ_PATH=/path/to/fairseq FAIRSEQ_USER_DIR=${FAIRSEQ_PATH}/examples/roberta/wsc CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train WSC/ \ --restore-file $ROBERTA_PATH \ --reset-optimizer --reset-dataloader --reset-meters \ --no-epoch-checkpoints --no-last-checkpoints --no-save-optimizer-state \ --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ --valid-subset val \ --fp16 --ddp-backend no_c10d \ --user-dir $FAIRSEQ_USER_DIR \ --task wsc --criterion wsc --wsc-cross-entropy \ --arch roberta_large --bpe gpt2 --max-positions 512 \ --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \ --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 \ --lr-scheduler polynomial_decay --lr $LR \ --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_NUM_UPDATES \ --max-sentences $MAX_SENTENCES \ --max-update $TOTAL_NUM_UPDATES \ --log-format simple --log-interval 100 \ --seed $SEED ``` The above command assumes training on 4 GPUs, but you can achieve the same results on a single GPU by adding `--update-freq=4`. 
### 3) Evaluate ```python from fairseq.models.roberta import RobertaModel from examples.roberta.wsc import wsc_utils # also loads WSC task and criterion roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'WSC/') roberta.cuda() nsamples, ncorrect = 0, 0 for sentence, label in wsc_utils.jsonl_iterator('WSC/val.jsonl', eval=True): pred = roberta.disambiguate_pronoun(sentence) nsamples += 1 if pred == label: ncorrect += 1 print('Accuracy: ' + str(ncorrect / float(nsamples))) # Accuracy: 0.9230769230769231 ``` ## RoBERTa training on WinoGrande dataset We have also provided `winogrande` task and criterion for finetuning on the [WinoGrande](https://mosaic.allenai.org/projects/winogrande) like datasets where there are always two candidates and one is correct. It's more efficient implementation for such subcases. ```bash TOTAL_NUM_UPDATES=23750 # Total number of training steps. WARMUP_UPDATES=2375 # Linearly increase LR over this many steps. LR=1e-05 # Peak LR for polynomial LR scheduler. MAX_SENTENCES=32 # Batch size per GPU. SEED=1 # Random seed. ROBERTA_PATH=/path/to/roberta/model.pt # we use the --user-dir option to load the task and criterion # from the examples/roberta/wsc directory: FAIRSEQ_PATH=/path/to/fairseq FAIRSEQ_USER_DIR=${FAIRSEQ_PATH}/examples/roberta/wsc cd fairseq CUDA_VISIBLE_DEVICES=0 fairseq-train winogrande_1.0/ \ --restore-file $ROBERTA_PATH \ --reset-optimizer --reset-dataloader --reset-meters \ --no-epoch-checkpoints --no-last-checkpoints --no-save-optimizer-state \ --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ --valid-subset val \ --fp16 --ddp-backend no_c10d \ --user-dir $FAIRSEQ_USER_DIR \ --task winogrande --criterion winogrande \ --wsc-margin-alpha 5.0 --wsc-margin-beta 0.4 \ --arch roberta_large --bpe gpt2 --max-positions 512 \ --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \ --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 \ --lr-scheduler polynomial_decay --lr $LR \ --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_NUM_UPDATES \ --max-sentences $MAX_SENTENCES \ --max-update $TOTAL_NUM_UPDATES \ --log-format simple --log-interval 100 ``` [Original repo](https://github.com/pytorch/fairseq/tree/master/examples/roberta/wsc)
mrm8488/roberta-base-1B-1-finetuned-squadv2
mrm8488
2021-05-20T18:27:20Z
13
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "en", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en --- # RoBERTa-base (1B-1) + SQuAD v2 ❓ [roberta-base-1B-1](https://huggingface.co/nyu-mll/roberta-base-1B-1) fine-tuned on [SQUAD v2 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for **Q&A** downstream task. ## Details of the downstream task (Q&A) - Model 🧠 RoBERTa Pretrained on Smaller Datasets [NYU Machine Learning for Language](https://huggingface.co/nyu-mll) pretrained RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). They released 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: They combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ## Details of the downstream task (Q&A) - Dataset 📚 **S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. **SQuAD2.0** combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python transformers/examples/question-answering/run_squad.py \ --model_type roberta \ --model_name_or_path 'nyu-mll/roberta-base-1B-1' \ --do_eval \ --do_train \ --do_lower_case \ --train_file /content/dataset/train-v2.0.json \ --predict_file /content/dataset/dev-v2.0.json \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 10 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /content/output \ --overwrite_output_dir \ --save_steps 1000 \ --version_2_with_negative ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **64.86** | | **F1** | **68.99** | ```json { 'exact': 64.86145034953255, 'f1': 68.9902640378272, 'total': 11873, 'HasAns_exact': 64.03508771929825, 'HasAns_f1': 72.3045554860189, 'HasAns_total': 5928, 'NoAns_exact': 65.68544995794785, 'NoAns_f1': 65.68544995794785, 'NoAns_total': 5945, 'best_exact': 64.86987282068559, 'best_exact_thresh': 0.0, 'best_f1': 68.99868650898054, 'best_f1_thresh': 0.0 } ``` ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline QnA_pipeline = pipeline('question-answering', model='mrm8488/roberta-base-1B-1-finetuned-squadv2') QnA_pipeline({ 'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.', 'question': 'What has been discovered by scientists from China ?' }) # Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.7145650685380576,'start': 0} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/distilroberta-finetuned-tweets-hate-speech
mrm8488
2021-05-20T18:25:15Z
6
6
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "twitter", "hate", "speech", "en", "dataset:tweets_hate_speech_detection", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: en
tags:
- twitter
- hate
- speech
datasets:
- tweets_hate_speech_detection
widget:
- text: "the fuck done with #mansplaining and other bullshit."
---

# distilroberta-base fine-tuned on the tweets_hate_speech_detection dataset for hate speech detection

Validation accuracy: 0.98
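Since the card only reports an accuracy figure, here is a minimal usage sketch. The example tweet is reused from the widget text above; the meaning of the returned label (hate vs. non-hate) is not documented on the card, so it is an assumption to verify against `config.id2label`.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-finetuned-tweets-hate-speech",
)

# Reuse the widget example from the card's metadata.
print(classifier("the fuck done with #mansplaining and other bullshit."))
# Output is [{'label': ..., 'score': ...}]; map the label index to hate / non-hate
# via config.id2label, since the card does not name the classes.
```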
mrm8488/codebert-base-finetuned-detect-insecure-code
mrm8488
2021-05-20T18:19:02Z
166
28
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "en", "dataset:codexglue", "arxiv:2002.08155", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: en
datasets:
- codexglue
---

# CodeBERT fine-tuned for Insecure Code Detection 💾⛔

[codebert-base](https://huggingface.co/microsoft/codebert-base) fine-tuned on the [CodeXGLUE -- Defect Detection](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) dataset for the **Insecure Code Detection** downstream task.

## Details of [CodeBERT](https://arxiv.org/abs/2002.08155)

We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate it in a zero-shot setting where the parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.

## Details of the downstream task (code classification) - Dataset 📚

Given a piece of source code, the task is to identify whether it is insecure code that may attack software systems, for example through resource leaks, use-after-free vulnerabilities and DoS attacks. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code.

The [dataset](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) used comes from the paper [*Devign*: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks](http://papers.nips.cc/paper/9209-devign-effective-vulnerability-identification-by-learning-comprehensive-program-semantics-via-graph-neural-networks.pdf). All projects are combined and split 80%/10%/10% for training/dev/test.

Data statistics of the dataset are shown in the table below:

|       | #Examples |
| ----- | :-------: |
| Train |  21,854   |
| Dev   |   2,732   |
| Test  |   2,732   |

## Test set metrics 🧾

| Methods  |    ACC    |
| -------- | :-------: |
| BiLSTM   |   59.37   |
| TextCNN  |   60.69   |
| [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf)  |   61.05   |
| [CodeBERT](https://arxiv.org/pdf/2002.08155.pdf) |   62.08   |
| [Ours](https://huggingface.co/mrm8488/codebert-base-finetuned-detect-insecure-code) | **65.30** |

## Model in Action 🚀

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

tokenizer = AutoTokenizer.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code')
model = AutoModelForSequenceClassification.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code')

inputs = tokenizer("your code here", return_tensors="pt", truncation=True, padding='max_length')
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits

print(np.argmax(logits.detach().numpy()))
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/codeBERTaJS
mrm8488
2021-05-20T18:17:36Z
10
6
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "javascript", "code", "arxiv:1909.09436", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: code
thumbnail:
tags:
- javascript
- code
widget:
- text: "async function createUser(req, <mask>) { if (!validUser(req.body.user)) { return res.status(400); } user = userService.createUser(req.body.user); return res.json(user); }"
---

# CodeBERTaJS

CodeBERTaJS is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub for `JavaScript` by [Manuel Romero](https://twitter.com/mrm8488).

The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`. Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are between 33% and 50% shorter, compared to the same corpus tokenized by gpt2/roberta).

The (small) **model** is a 6-layer, 84M parameters, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full `javascript` corpus (120M after preprocessing) for 2 epochs.

## Quick start: masked language modeling prediction

```python
JS_CODE = """
async function createUser(req, <mask>) {
  if (!validUser(req.body.user)) {
    return res.status(400);
  }
  user = userService.createUser(req.body.user);
  return res.json(user);
}
""".lstrip()
```

### Does the model know how to complete simple JS/express like code?

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="mrm8488/codeBERTaJS",
    tokenizer="mrm8488/codeBERTaJS"
)

fill_mask(JS_CODE)

## Top 5 predictions:
# 'res'   # prob 0.069489665329
# 'next'
# 'req'
# 'user'
# ',req'
```

### Yes! That was easy 🎉 Let's try with another example

```python
JS_CODE_ = """
function getKeys(obj) {
  keys = [];
  for (var [key, value] of Object.entries(obj)) {
    keys.push(<mask>);
  }
  return keys
}
""".lstrip()

fill_mask(JS_CODE_)
```

Results:

```python
'obj', 'key', ' value', 'keys', 'i'
```

> Not so bad! The right token was predicted as the second option! 🎉

## This work is heavily inspired by [codeBERTa](https://github.com/huggingface/transformers/blob/master/model_cards/huggingface/CodeBERTa-small-v1/README.md) by the huggingface team

<br>

## CodeSearchNet citation

<details>

```bibtex
@article{husain_codesearchnet_2019,
  title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}},
  shorttitle = {{CodeSearchNet} {Challenge}},
  url = {http://arxiv.org/abs/1909.09436},
  urldate = {2020-03-12},
  journal = {arXiv:1909.09436 [cs, stat]},
  author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
  month = sep,
  year = {2019},
  note = {arXiv: 1909.09436},
}
```

</details>

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/RuPERTa-base-finetuned-squadv1
mrm8488
2021-05-20T18:13:28Z
14
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "es", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: es datasets: - squad ---
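A minimal usage sketch, not part of the original card: it assumes the standard 🤗 Transformers question-answering pipeline, and the Spanish question/context pair is made up for illustration.

```python
from transformers import pipeline

# Illustrative sketch: load the fine-tuned checkpoint into a QA pipeline.
qa = pipeline(
    "question-answering",
    model="mrm8488/RuPERTa-base-finetuned-squadv1",
    tokenizer="mrm8488/RuPERTa-base-finetuned-squadv1",
)

# Made-up question/context pair, for illustration only.
result = qa(
    question="¿Quién creó el modelo?",
    context="El modelo fue creado por Manuel Romero y ajustado sobre el conjunto de datos SQuAD v1.",
)
print(result["answer"], result["score"])
```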
mrm8488/RoBERTinha
mrm8488
2021-05-20T18:03:32Z
14
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "gl", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: gl widget: - text: "Galicia é unha <mask> autónoma española." - text: "A lingua oficial de Galicia é o <mask>." --- # RoBERTinha: RoBERTa-like Language model trained on OSCAR Galician corpus
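A minimal fill-mask sketch, not part of the original card, reusing one of the widget examples above and the standard 🤗 Transformers pipeline:

```python
from transformers import pipeline

# Sketch only: query the masked-LM head with one of the card's widget examples.
fill_mask = pipeline("fill-mask", model="mrm8488/RoBERTinha", tokenizer="mrm8488/RoBERTinha")
for prediction in fill_mask("Galicia é unha <mask> autónoma española."):
    print(prediction["token_str"], round(prediction["score"], 4))
```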
mrm8488/CodeBERTaPy
mrm8488
2021-05-20T18:01:23Z
25
3
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "code", "arxiv:1909.09436", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: code thumbnail: --- # CodeBERTaPy CodeBERTaPy is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub for `python` by [Manuel Romero](https://twitter.com/mrm8488) The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`. Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are between 33% to 50% shorter, compared to the same corpus tokenized by gpt2/roberta). The (small) **model** is a 6-layer, 84M parameters, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full `python` corpus for 4 epochs. ## Quick start: masked language modeling prediction ```python PYTHON_CODE = """ fruits = ['apples', 'bananas', 'oranges'] for idx, <mask> in enumerate(fruits): print("index is %d and value is %s" % (idx, val)) """.lstrip() ``` ### Does the model know how to complete simple Python code? ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="mrm8488/CodeBERTaPy", tokenizer="mrm8488/CodeBERTaPy" ) fill_mask(PYTHON_CODE) ## Top 5 predictions: 'val' # prob 0.980728805065155 'value' 'idx' ',val' '_' ``` ### Yes! That was easy 🎉 Let's try with another Flask like example ```python PYTHON_CODE2 = """ @app.route('/<name>') def hello_name(name): return "Hello {}!".format(<mask>) if __name__ == '__main__': app.run() """.lstrip() fill_mask(PYTHON_CODE2) ## Top 5 predictions: 'name' # prob 0.9961813688278198 ' name' 'url' 'description' 'self' ``` ### Yeah! It works 🎉 Let's try with another Tensorflow/Keras like example ```python PYTHON_CODE3=""" model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.<mask>(128, activation='relu'), keras.layers.Dense(10, activation='softmax') ]) """.lstrip() fill_mask(PYTHON_CODE3) ## Top 5 predictions: 'Dense' # prob 0.4482928514480591 'relu' 'Flatten' 'Activation' 'Conv' ``` > Great! 🎉 ## This work is heavily inspired on [CodeBERTa](https://github.com/huggingface/transformers/blob/master/model_cards/huggingface/CodeBERTa-small-v1/README.md) by huggingface team <br> ## CodeSearchNet citation <details> ```bibtex @article{husain_codesearchnet_2019, title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}}, shorttitle = {{CodeSearchNet} {Challenge}}, url = {http://arxiv.org/abs/1909.09436}, urldate = {2020-03-12}, journal = {arXiv:1909.09436 [cs, stat]}, author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, month = sep, year = {2019}, note = {arXiv: 1909.09436}, } ``` </details> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
julien-c/dummy-unknown
julien-c
2021-05-20T17:31:14Z
61,031
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "ci", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - ci --- ## Dummy model used for unit testing and CI ```python import json import os from transformers import RobertaConfig, RobertaForMaskedLM, TFRobertaForMaskedLM DIRNAME = "./dummy-unknown" config = RobertaConfig(10, 20, 1, 1, 40) model = RobertaForMaskedLM(config) model.save_pretrained(DIRNAME) tf_model = TFRobertaForMaskedLM.from_pretrained(DIRNAME, from_pt=True) tf_model.save_pretrained(DIRNAME) # Tokenizer: vocab = [ "l", "o", "w", "e", "r", "s", "t", "i", "d", "n", "\u0120", "\u0120l", "\u0120n", "\u0120lo", "\u0120low", "er", "\u0120lowest", "\u0120newer", "\u0120wider", "<unk>", ] vocab_tokens = dict(zip(vocab, range(len(vocab)))) merges = ["#version: 0.2", "\u0120 l", "\u0120l o", "\u0120lo w", "e r", ""] vocab_file = os.path.join(DIRNAME, "vocab.json") merges_file = os.path.join(DIRNAME, "merges.txt") with open(vocab_file, "w", encoding="utf-8") as fp: fp.write(json.dumps(vocab_tokens) + "\n") with open(merges_file, "w", encoding="utf-8") as fp: fp.write("\n".join(merges)) ```
jpcorb20/toxic-detector-distilroberta
jpcorb20
2021-05-20T17:25:58Z
88
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# DistilRoBERTa for toxic comment detection

See my GitHub repo [toxic-comment-server](https://github.com/jpcorb20/toxic-comment-server).

The model was trained from [DistilRoberta](https://huggingface.co/distilroberta-base) on [Kaggle Toxic Comments](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) with the BCEWithLogits loss for multi-label prediction. Thus, apply a sigmoid activation to the logits; the model is not meant to be used with a softmax over the outputs (as the HF widget does).

## Evaluation

F1 scores:

- toxic: 0.72
- severe_toxic: 0.38
- obscene: 0.72
- threat: 0.52
- insult: 0.69
- identity_hate: 0.60

Macro-F1: 0.61
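A sketch of the sigmoid-based multi-label scoring described above, not from the original card; the label order is assumed to follow the usual Jigsaw convention and should be checked against the model config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed label order (Jigsaw convention); verify against model.config.id2label.
labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

tokenizer = AutoTokenizer.from_pretrained("jpcorb20/toxic-detector-distilroberta")
model = AutoModelForSequenceClassification.from_pretrained("jpcorb20/toxic-detector-distilroberta")

inputs = tokenizer("your comment here", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Independent per-label probabilities via sigmoid, not a softmax over classes.
probs = torch.sigmoid(logits)[0]
print({label: float(p) for label, p in zip(labels, probs)})
```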
iarfmoose/roberta-small-bulgarian
iarfmoose
2021-05-20T16:54:01Z
6
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "bg", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: bg ---

# RoBERTa-small-bulgarian

The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This is a smaller version of [RoBERTa-base-bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian) with only 6 hidden layers, but similar performance.

## Intended uses

This model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.

## Limitations and bias

The training data is unfiltered text from the internet and may contain all sorts of biases.

## Training data

This model was trained on the following data:
- [bg_dedup from OSCAR](https://oscar-corpus.com/)
- [Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
- [Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)

## Training procedure

The model was pretrained using a masked language-modeling objective with dynamic masking as described [here](https://huggingface.co/roberta-base#preprocessing). It was trained for 160k steps. The batch size was limited to 8 due to GPU memory limitations.
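A minimal cloze-task sketch, not from the original card; the Bulgarian sentence is made up for illustration.

```python
from transformers import pipeline

# Illustrative fill-mask query through the standard pipeline.
fill_mask = pipeline("fill-mask", model="iarfmoose/roberta-small-bulgarian")
for prediction in fill_mask("София е столицата на <mask>."):
    print(prediction["token_str"], prediction["score"])
```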
iarfmoose/roberta-small-bulgarian-pos
iarfmoose
2021-05-20T16:52:10Z
4
1
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "token-classification", "bg", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: bg ---

# RoBERTa-small-bulgarian-POS

The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This model is a version of [RoBERTa-small-Bulgarian](https://huggingface.co/iarfmoose/roberta-small-bulgarian) fine-tuned for part-of-speech tagging.

## Intended uses

The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.

An example of this can be found [here](https://github.com/iarfmoose/bulgarian-nlp/blob/master/models/postagger.py).

## Limitations and bias

The pretraining data is unfiltered text from the internet and may contain all sorts of biases.

## Training data

In addition to the pretraining data used in [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian), the model was trained on the UPOS tags from [UD_Bulgarian-BTB](https://github.com/UniversalDependencies/UD_Bulgarian-BTB).

## Training procedure

The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 98% on the test set.
iarfmoose/roberta-base-bulgarian-pos
iarfmoose
2021-05-20T16:49:07Z
14
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "token-classification", "bg", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: bg ---

# RoBERTa-base-bulgarian-POS

The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This model is a version of [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian) fine-tuned for part-of-speech tagging.

## Intended uses

The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.

An example of this can be found [here](https://github.com/iarfmoose/bulgarian-nlp/blob/master/models/postagger.py).

## Limitations and bias

The pretraining data is unfiltered text from the internet and may contain all sorts of biases.

## Training data

In addition to the pretraining data used in [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian), the model was trained on the UPOS tags from [UD_Bulgarian-BTB](https://github.com/UniversalDependencies/UD_Bulgarian-BTB).

## Training procedure

The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 97% on the test set.
ghanashyamvtatti/roberta-fake-news
ghanashyamvtatti
2021-05-20T16:33:04Z
11
3
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
A fake news detector using RoBERTa.

Dataset: https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset

Training used hyperparameter search with 10 trials.
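A minimal usage sketch, not part of the original card; which returned label means "fake" is not documented, so treat the label names as something to inspect.

```python
from transformers import pipeline

# The label names come from the model config and are not documented above;
# inspect the output rather than assuming which label means "fake".
classifier = pipeline("text-classification", model="ghanashyamvtatti/roberta-fake-news")
print(classifier("Breaking: scientists confirm the moon is made entirely of cheese."))
```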
elgeish/cs224n-squad2.0-roberta-base
elgeish
2021-05-20T16:16:38Z
12
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "arxiv:2004.07067", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
## CS224n SQuAD2.0 Project Dataset The goal of this model is to save CS224n students GPU time when establishing baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf). The training set used to fine-tune this model is the same as the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however, evaluation and model selection were performed using roughly half of the official dev set, 6078 examples, picked at random. The data files can be found at <https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020 version. Given that the official SQuAD2.0 dev set contains the project's test set, students must make sure not to use the official SQuAD2.0 dev set in any way — including the use of models fine-tuned on the official SQuAD2.0, since they used the official SQuAD2.0 dev set for model selection. ## Results ```json { "exact": 75.32082922013821, "f1": 78.66699523704254, "total": 6078, "HasAns_exact": 74.84536082474227, "HasAns_f1": 81.83436324767868, "HasAns_total": 2910, "NoAns_exact": 75.75757575757575, "NoAns_f1": 75.75757575757575, "NoAns_total": 3168, "best_exact": 75.32082922013821, "best_exact_thresh": 0.0, "best_f1": 78.66699523704266, "best_f1_thresh": 0.0 } ``` ## Notable Arguments ```json { "do_lower_case": true, "doc_stride": 128, "fp16": false, "fp16_opt_level": "O1", "gradient_accumulation_steps": 24, "learning_rate": 3e-05, "max_answer_length": 30, "max_grad_norm": 1, "max_query_length": 64, "max_seq_length": 384, "model_name_or_path": "roberta-base", "model_type": "roberta", "num_train_epochs": 4, "per_gpu_train_batch_size": 16, "save_steps": 5000, "seed": 42, "train_batch_size": 16, "version_2_with_negative": true, "warmup_steps": 0, "weight_decay": 0 } ``` ## Environment Setup ```json { "transformers": "2.5.1", "pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0", "python": "3.6.5=hc3d631a_2", "os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux", "gpu": "Tesla V100-SXM2-16GB" } ``` ## How to Cite ```BibTeX @misc{elgeish2020gestalt, title={Gestalt: a Stacking Ensemble for SQuAD2.0}, author={Mohamed El-Geish}, journal={arXiv e-prints}, archivePrefix={arXiv}, eprint={2004.07067}, year={2020}, } ``` ## Related Models * [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2) * [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2) * [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1) * [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
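A minimal inference sketch, not part of the original card; it assumes the standard 🤗 Transformers question-answering pipeline, and the question/context pair is made up.

```python
from transformers import pipeline

# Because this is a SQuAD2.0 model, handle_impossible_answer=True lets the pipeline
# return an empty answer for unanswerable questions.
qa = pipeline("question-answering", model="elgeish/cs224n-squad2.0-roberta-base")
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The model was fine-tuned on the official SQuAD2.0 training set.",
    handle_impossible_answer=True,
)
print(result)
```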
dbernsohn/roberta-php
dbernsohn
2021-05-20T15:56:10Z
5
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# roberta-php

--- language: php datasets: - code_search_net ---

This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for a **php** masked language modeling task.

To load the model: (necessary packages: !pip install transformers sentencepiece)

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-php")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-php")

fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer
)
```

You can then use this model to fill masked words in PHP code.

```python
code = """
$people = array(
    array('name' => 'Kalle', 'salt' => 856412),
    array('name' => 'Pierre', 'salt' => 215863)
);

for($i = 0; $i < count($<mask>); ++$i) {
    $people[$i]['salt'] = mt_rand(000000, 999999);
}
""".lstrip()

pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('people', 0.785636842250824),
#  ('parts', 0.006270722020417452),
#  ('id', 0.0035842324141412973),
#  ('data', 0.0025512021966278553),
#  ('config', 0.002258970635011792)]
```

The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)

> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
dbernsohn/roberta-java
dbernsohn
2021-05-20T15:54:29Z
13
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# roberta-java

--- language: Java datasets: - code_search_net ---

This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for a **Java** masked language modeling task.

To load the model: (necessary packages: !pip install transformers sentencepiece)

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-java")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-java")

fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer
)
```

You can then use this model to fill masked words in Java code.

```python
code = """
String[] cars = {"Volvo", "BMW", "Ford", "Mazda"};
for (String i : cars) {
    System.out.<mask>(i);
}
""".lstrip()

pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('println', 0.32571351528167725),
#  ('get', 0.2897663116455078),
#  ('remove', 0.0637081190943718),
#  ('exit', 0.058875661343336105),
#  ('print', 0.034190207719802856)]
```

The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)

> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
dbernsohn/roberta-go
dbernsohn
2021-05-20T15:53:19Z
13
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# roberta-go

--- language: Go datasets: - code_search_net ---

This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for a **Golang** masked language modeling task.

To load the model: (necessary packages: !pip install transformers sentencepiece)

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-go")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-go")

fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer
)
```

You can then use this model to fill masked words in Go code.

```python
code = """
package main

import (
    "fmt"
    "runtime"
)

func main() {
    fmt.Print("Go runs on ")
    switch os := runtime.<mask>; os {
    case "darwin":
        fmt.Println("OS X.")
    case "linux":
        fmt.Println("Linux.")
    default:
        // freebsd, openbsd,
        // plan9, windows...
        fmt.Printf("%s.\n", os)
    }
}
""".lstrip()

pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('GOOS', 0.11810332536697388),
#  ('FileInfo', 0.04276798665523529),
#  ('Stdout', 0.03572738170623779),
#  ('Getenv', 0.025064032524824142),
#  ('FileMode', 0.01462600938975811)]
```

The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)

> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
clue/roberta_chinese_large
clue
2021-05-20T15:28:53Z
12
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "zh", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: zh --- ## roberta_chinese_large ### Overview **Language model:** roberta-large **Model size:** 1.2G **Language:** Chinese **Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020) **Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE) ### Results For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE). ### Usage **NOTE:** You have to call **BertTokenizer** instead of RobertaTokenizer !!! ``` import torch from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_large") roberta = BertModel.from_pretrained("clue/roberta_chinese_large") ``` ### About CLUE benchmark Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard. Github: https://github.com/CLUEbenchmark Website: https://www.cluebenchmarks.com/
clue/roberta_chinese_base
clue
2021-05-20T15:23:58Z
317
7
transformers
[ "transformers", "pytorch", "jax", "roberta", "zh", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: zh --- ## roberta_chinese_base ### Overview **Language model:** roberta-base **Model size:** 392M **Language:** Chinese **Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020) **Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE) ### Results For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE). ### Usage **NOTE:** You have to call **BertTokenizer** instead of RobertaTokenizer !!! ``` import torch from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_base") roberta = BertModel.from_pretrained("clue/roberta_chinese_base") ``` ### About CLUE benchmark Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard. Github: https://github.com/CLUEbenchmark Website: https://www.cluebenchmarks.com/
clarin-pl/roberta-polish-kgr10
clarin-pl
2021-05-20T15:22:13Z
44
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Work in Progress Polish RoBERTa

The model has been trained for about 5% of the target training time. We will publish new increments as they are trained.

The model was pre-trained on the KGR10 corpus. More about the model at [CLARIN-dspace](https://huggingface.co/clarin/roberta-polish-v1).

## Usage

## Huggingface model hub

## Acknowledgments

[CLARIN-PL and CLARIN-BIZ project](https://clarin-pl.eu/)
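A minimal fill-mask sketch for the (empty) Usage section above, not from the original card; the Polish example sentence is illustrative only.

```python
from transformers import pipeline

# Illustrative masked-LM query through the standard pipeline.
fill_mask = pipeline("fill-mask", model="clarin-pl/roberta-polish-kgr10")
for prediction in fill_mask("Warszawa to stolica <mask>."):
    print(prediction["token_str"], prediction["score"])
```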
castorini/ance-msmarco-doc-firstp
castorini
2021-05-20T15:17:20Z
7
1
transformers
[ "transformers", "pytorch", "roberta", "arxiv:2007.00808", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini: > Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf) For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
cahya/roberta-base-indonesian-522M
cahya
2021-05-20T14:41:00Z
338
6
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "id", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: "id" license: "mit" datasets: - Indonesian Wikipedia widget: - text: "Ibu ku sedang bekerja <mask> supermarket." --- # Indonesian RoBERTa base model (uncased) ## Model description It is RoBERTa-base model pre-trained with indonesian Wikipedia using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers) ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='cahya/roberta-base-indonesian-522M') >>> unmasker("Ibu ku sedang bekerja <mask> supermarket") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel model_name='cahya/roberta-base-indonesian-522M' tokenizer = RobertaTokenizer.from_pretrained(model_name) model = RobertaModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import RobertaTokenizer, TFRobertaModel model_name='cahya/roberta-base-indonesian-522M' tokenizer = RobertaTokenizer.from_pretrained(model_name) model = TFRobertaModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data This model was pre-trained with 522MB of indonesian Wikipedia. The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ```<s> Sentence A </s> Sentence B </s>```
aychang/roberta-base-imdb
aychang
2021-05-20T14:25:56Z
1,446
5
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "en", "dataset:imdb", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en thumbnail: tags: - text-classification license: mit datasets: - imdb metrics: ---

# IMDB Sentiment Task: roberta-base

## Model description

A simple base RoBERTa model trained on the "imdb" dataset.

## Intended uses & limitations

#### How to use

##### Transformers

```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/roberta-base-imdb"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)

results = nlp(["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."])
```

##### AdaptNLP

```python
from adaptnlp import EasySequenceClassifier

model_name = "aychang/roberta-base-imdb"
texts = ["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."]

classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```

#### Limitations and bias

This is a minimal language model trained on a benchmark dataset.

## Training data

IMDB https://huggingface.co/datasets/imdb

## Training procedure

#### Hardware

One V100

#### Hyperparameters and Training Args

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./models',
    overwrite_output_dir=False,
    num_train_epochs=2,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    warmup_steps=500,
    weight_decay=0.01,
    evaluation_strategy="steps",
    logging_dir='./logs',
    fp16=False,
    eval_steps=800,
    save_steps=300000
)
```

## Eval results

```
{'epoch': 2.0,
 'eval_accuracy': 0.94668,
 'eval_f1': array([0.94603457, 0.94731017]),
 'eval_loss': 0.2578844428062439,
 'eval_precision': array([0.95762642, 0.93624502]),
 'eval_recall': array([0.93472, 0.95864]),
 'eval_runtime': 244.7522,
 'eval_samples_per_second': 102.144}
```
patrickvonplaten/bert-base-cased_fine_tuned_glue_mrpc_demo
patrickvonplaten
2021-05-20T14:17:38Z
6
0
transformers
[ "transformers", "jax", "bert", "text-classification", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - glue ---

# Bert-base-cased Fine Tuned Glue Mrpc Demo

This checkpoint was initialized from the pre-trained checkpoint bert-base-cased and subsequently fine-tuned on the GLUE task MRPC using [this](https://colab.research.google.com/drive/162pW3wonGcMMrGxmA-jdxwy1rhqXd90x?usp=sharing) notebook. Training was conducted for 3 epochs, using a linearly decaying learning rate of 2e-05 and a total batch size of 32.

The model has a final training loss of 0.103 and an accuracy of 0.831.
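A minimal inference sketch, not part of the original card; the sentence pair is made up, and the index-to-label mapping (0 = not equivalent, 1 = equivalent) follows the usual MRPC convention, which is assumed here rather than stated above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "patrickvonplaten/bert-base-cased_fine_tuned_glue_mrpc_demo"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# MRPC is a sentence-pair (paraphrase) task, so both sentences go to the tokenizer.
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```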
aravind-812/roberta-train-json
aravind-812
2021-05-20T14:12:53Z
9
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- datasets: - squad widget: - text: "Which name is also used to describe the Amazon rainforest in English?" context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species." - text: "How many square kilometers of rainforest is covered in the basin?" context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
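A minimal usage sketch, not part of the original card, mirroring the widget examples above with the standard 🤗 Transformers question-answering pipeline (context abbreviated for brevity).

```python
from transformers import pipeline

qa = pipeline("question-answering", model="aravind-812/roberta-train-json")
result = qa(
    question="Which name is also used to describe the Amazon rainforest in English?",
    context=(
        "The Amazon rainforest, also known in English as Amazonia or the Amazon Jungle, "
        "is a moist broadleaf forest that covers most of the Amazon basin of South America."
    ),
)
print(result["answer"])
```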
pchanda/pretrained-smiles-pubchem10m
pchanda
2021-05-20T13:01:15Z
729
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
Model pretrained on 10M SMILES strings from PubChem.
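A minimal fill-mask sketch, not part of the original card; how this model's tokenizer splits SMILES strings is not documented here, so the mask placement below is purely illustrative.

```python
from transformers import pipeline

# Illustrative only: complete a masked position in a benzene-containing SMILES fragment.
fill_mask = pipeline("fill-mask", model="pchanda/pretrained-smiles-pubchem10m")
print(fill_mask("c1ccccc1<mask>"))
```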
abhishek/autonlp-imdb_sentiment_classification-31154
abhishek
2021-05-20T12:46:38Z
6
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autonlp", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 31154 ## Validation Metrics - Loss: 0.19292379915714264 - Accuracy: 0.9395 - Precision: 0.9569557080474111 - Recall: 0.9204 - AUC: 0.9851040399999998 - F1: 0.9383219492302988 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb_sentiment_classification-31154 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb_sentiment_classification-31154", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb_sentiment_classification-31154", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
Naveen-k/KanBERTo
Naveen-k
2021-05-20T12:16:02Z
13
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "kn", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: kn ---

# Welcome to KanBERTo (ಕನ್ಬರ್ಟೋ)

## Model Description

> This is a small language model for the [Kannada](https://en.wikipedia.org/wiki/Kannada) language, trained on 1M data samples taken from the [OSCAR page](https://traces1.inria.fr/oscar/files/compressed-orig/kn.txt.gz).

## Training params

- **Dataset** - 1M data samples from the [OSCAR page](https://traces1.inria.fr/oscar/) were used to train this model. Even though the full data set is 1.7 GB, only 1M samples were picked due to resource constraints for training. If you are interested in collaboration and have computational resources to train on, you are most welcome to do so.
- **Preprocessing** - ByteLevelBPETokenizer is used to tokenize the sentences at character level and the vocabulary size is set to 52k as per standard values given by 🤗.
- **Hyperparameters** -
  - __ByteLevelBPETokenizer__: vocabulary size = 52_000 and min_frequency = 2
  - __Trainer__:
    - num_train_epochs=12 - trained for 12 epochs
    - per_gpu_train_batch_size=64 - batch size for the data samples is 64
    - save_steps=10_000 - save model every 10k steps
    - save_total_limit=2 - save limit is set to 2

**Intended uses & limitations**

This is for anyone who wants to make use of Kannada language models for various tasks like language generation, translation and many more use cases.

**Whatever else is helpful!**

If you are interested in collaboration, feel free to reach me: [Naveen](mailto:[email protected])
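A minimal fill-mask sketch, not part of the original card; the Kannada sentence ("My name is <mask>.") is illustrative only and can be replaced with any sentence containing the <mask> token.

```python
from transformers import pipeline

# Illustrative masked-LM query through the standard pipeline.
fill_mask = pipeline("fill-mask", model="Naveen-k/KanBERTo")
print(fill_mask("ನನ್ನ ಹೆಸರು <mask>."))
```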
NTUYG/DeepSCC-RoBERTa
NTUYG
2021-05-20T12:15:05Z
22
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
## How to use

```python
from simpletransformers.classification import ClassificationModel, ClassificationArgs

name_file = ['bash', 'c', 'c#', 'c++', 'css', 'haskell', 'java', 'javascript', 'lua', 'objective-c', 'perl', 'php', 'python', 'r', 'ruby', 'scala', 'sql', 'swift', 'vb.net']

deep_scc_model_args = ClassificationArgs(num_train_epochs=10, max_seq_length=300, use_multiprocessing=False)
deep_scc_model = ClassificationModel("roberta", "NTUYG/DeepSCC-RoBERTa", num_labels=19, args=deep_scc_model_args, use_cuda=True)

code = ''' public static double getSimilarity(String phrase1, String phrase2) {
      return (getSC(phrase1, phrase2) + getSC(phrase2, phrase1)) / 2.0;
  }'''
code = code.replace('\n', ' ').replace('\r', ' ')

predictions, raw_outputs = deep_scc_model.predict([code])
predict = name_file[predictions[0]]
print(predict)
```
LIAMF-USP/roberta-large-finetuned-race
LIAMF-USP
2021-05-20T12:08:36Z
33
11
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "multiple-choice", "dataset:race", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:04Z
--- language: "english" license: "mit" datasets: - race metrics: - accuracy ---

# Roberta Large Fine Tuned on RACE

## Model description

This model is a fine-tuned model of Roberta-large applied on RACE.

#### How to use

```python
import datasets
import torch
from transformers import RobertaTokenizer
from transformers import RobertaForMultipleChoice

tokenizer = RobertaTokenizer.from_pretrained(
    "LIAMF-USP/roberta-large-finetuned-race")
model = RobertaForMultipleChoice.from_pretrained(
    "LIAMF-USP/roberta-large-finetuned-race")
dataset = datasets.load_dataset(
    "race",
    "all",
    split=["train", "validation", "test"],
)
training_examples = dataset[0]
evaluation_examples = dataset[1]
test_examples = dataset[2]

MAX_SEQ_LENGTH = 512  # matches the max_length hyperparameter below

example = training_examples[0]
example_id = example["example_id"]
question = example["question"]
context = example["article"]
options = example["options"]
label_example = example["answer"]
label_map = {label: i for i, label in enumerate(["A", "B", "C", "D"])}

choices_inputs = []
for ending_idx, ending in enumerate(options):
    if question.find("_") != -1:
        # fill-in-the-blank questions
        question_option = question.replace("_", ending)
    else:
        question_option = question + " " + ending
    inputs = tokenizer(
        context,
        question_option,
        add_special_tokens=True,
        max_length=MAX_SEQ_LENGTH,
        padding="max_length",
        truncation=True,
        return_overflowing_tokens=False,
    )
    choices_inputs.append(inputs)

label = label_map[label_example]
input_ids = torch.tensor([x["input_ids"] for x in choices_inputs]).unsqueeze(0)
attention_mask = (
    torch.tensor([x["attention_mask"] for x in choices_inputs]).unsqueeze(0)
    # as the sentences follow the same structure,
    # just one of them is necessary to check
    if "attention_mask" in choices_inputs[0]
    else None
)
example_encoded = {
    "input_ids": input_ids,
    "attention_mask": attention_mask,
    "labels": torch.tensor(label).unsqueeze(0),
}

output = model(**example_encoded)
```

## Training data

The initial model was [roberta large model](https://huggingface.co/roberta-large) which was then fine-tuned on the [RACE dataset](https://www.cs.cmu.edu/~glai1/data/race/).

## Training procedure

It was necessary to preprocess the data with a method that is exemplified for a single instance in the _How to use_ section. The used hyperparameters were the following:

| Hyperparameter | Value |
|:----:|:----:|
| adam_beta1 | 0.9 |
| adam_beta2 | 0.98 |
| adam_epsilon | 1.000e-8 |
| eval_batch_size | 32 |
| train_batch_size | 1 |
| fp16 | True |
| gradient_accumulation_steps | 16 |
| learning_rate | 0.00001 |
| warmup_steps | 1000 |
| max_length | 512 |
| epochs | 4 |

## Eval results:

| Dataset Acc | Eval | All Test | High School Test | Middle School Test |
|:----:|:----:|:----:|:----:|:----:|
| | 85.2 | 84.9 | 83.5 | 88.0 |

**The model was trained with a Tesla V100-PCIE-16GB**
zanelim/singbert
zanelim
2021-05-20T09:38:41Z
6
4
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "pretraining", "singapore", "sg", "singlish", "malaysia", "ms", "manglish", "bert-base-uncased", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: - singapore - sg - singlish - malaysia - ms - manglish - bert-base-uncased license: mit datasets: - reddit singapore, malaysia - hardwarezone widget: - text: "kopi c siew [MASK]" - text: "die [MASK] must try" --- # Model name SingBert - Bert for Singlish (SG) and Manglish (MY). ## Model description [BERT base uncased](https://github.com/google-research/bert#pre-trained-models), with pre-training finetuned on [singlish](https://en.wikipedia.org/wiki/Singlish) and [manglish](https://en.wikipedia.org/wiki/Manglish) data. ## Intended uses & limitations #### How to use ```python >>> from transformers import pipeline >>> nlp = pipeline('fill-mask', model='zanelim/singbert') >>> nlp("kopi c siew [MASK]") [{'sequence': '[CLS] kopi c siew dai [SEP]', 'score': 0.5092713236808777, 'token': 18765, 'token_str': 'dai'}, {'sequence': '[CLS] kopi c siew mai [SEP]', 'score': 0.3515934646129608, 'token': 14736, 'token_str': 'mai'}, {'sequence': '[CLS] kopi c siew bao [SEP]', 'score': 0.05576375499367714, 'token': 25945, 'token_str': 'bao'}, {'sequence': '[CLS] kopi c siew. [SEP]', 'score': 0.006019321270287037, 'token': 1012, 'token_str': '.'}, {'sequence': '[CLS] kopi c siew sai [SEP]', 'score': 0.0038361591286957264, 'token': 18952, 'token_str': 'sai'}] >>> nlp("one teh c siew dai, and one kopi [MASK].") [{'sequence': '[CLS] one teh c siew dai, and one kopi c [SEP]', 'score': 0.6176503300666809, 'token': 1039, 'token_str': 'c'}, {'sequence': '[CLS] one teh c siew dai, and one kopi o [SEP]', 'score': 0.21094971895217896, 'token': 1051, 'token_str': 'o'}, {'sequence': '[CLS] one teh c siew dai, and one kopi. [SEP]', 'score': 0.13027705252170563, 'token': 1012, 'token_str': '.'}, {'sequence': '[CLS] one teh c siew dai, and one kopi! [SEP]', 'score': 0.004680239595472813, 'token': 999, 'token_str': '!'}, {'sequence': '[CLS] one teh c siew dai, and one kopi w [SEP]', 'score': 0.002034128177911043, 'token': 1059, 'token_str': 'w'}] >>> nlp("dont play [MASK] leh") [{'sequence': '[CLS] dont play play leh [SEP]', 'score': 0.9281464219093323, 'token': 2377, 'token_str': 'play'}, {'sequence': '[CLS] dont play politics leh [SEP]', 'score': 0.010990909300744534, 'token': 4331, 'token_str': 'politics'}, {'sequence': '[CLS] dont play punk leh [SEP]', 'score': 0.005583590362221003, 'token': 7196, 'token_str': 'punk'}, {'sequence': '[CLS] dont play dirty leh [SEP]', 'score': 0.0025784350000321865, 'token': 6530, 'token_str': 'dirty'}, {'sequence': '[CLS] dont play cheat leh [SEP]', 'score': 0.0025066907983273268, 'token': 21910, 'token_str': 'cheat'}] >>> nlp("catch no [MASK]") [{'sequence': '[CLS] catch no ball [SEP]', 'score': 0.7922210693359375, 'token': 3608, 'token_str': 'ball'}, {'sequence': '[CLS] catch no balls [SEP]', 'score': 0.20503675937652588, 'token': 7395, 'token_str': 'balls'}, {'sequence': '[CLS] catch no tail [SEP]', 'score': 0.0006608376861549914, 'token': 5725, 'token_str': 'tail'}, {'sequence': '[CLS] catch no talent [SEP]', 'score': 0.0002158183924620971, 'token': 5848, 'token_str': 'talent'}, {'sequence': '[CLS] catch no prisoners [SEP]', 'score': 5.3481446229852736e-05, 'token': 5895, 'token_str': 'prisoners'}] >>> nlp("confirm plus [MASK]") [{'sequence': '[CLS] confirm plus chop [SEP]', 'score': 0.992355227470398, 'token': 24494, 'token_str': 'chop'}, {'sequence': '[CLS] confirm plus one [SEP]', 'score': 0.0037301010452210903, 'token': 2028, 'token_str': 'one'}, {'sequence': '[CLS] confirm plus minus [SEP]', 'score': 0.0014284878270700574, 'token': 15718, 'token_str': 
'minus'}, {'sequence': '[CLS] confirm plus 1 [SEP]', 'score': 0.0011354683665558696, 'token': 1015, 'token_str': '1'}, {'sequence': '[CLS] confirm plus chopped [SEP]', 'score': 0.0003804611915256828, 'token': 24881, 'token_str': 'chopped'}] >>> nlp("die [MASK] must try") [{'sequence': '[CLS] die die must try [SEP]', 'score': 0.9552758932113647, 'token': 3280, 'token_str': 'die'}, {'sequence': '[CLS] die also must try [SEP]', 'score': 0.03644804656505585, 'token': 2036, 'token_str': 'also'}, {'sequence': '[CLS] die liao must try [SEP]', 'score': 0.003282855963334441, 'token': 727, 'token_str': 'liao'}, {'sequence': '[CLS] die already must try [SEP]', 'score': 0.0004937972989864647, 'token': 2525, 'token_str': 'already'}, {'sequence': '[CLS] die hard must try [SEP]', 'score': 0.0003659659414552152, 'token': 2524, 'token_str': 'hard'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('zanelim/singbert') model = BertModel.from_pretrained("zanelim/singbert") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained("zanelim/singbert") model = TFBertModel.from_pretrained("zanelim/singbert") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` #### Limitations and bias This model was finetuned on colloquial Singlish and Manglish corpus, hence it is best applied on downstream tasks involving the main constituent languages- english, mandarin, malay. Also, as the training data is mainly from forums, beware of existing inherent bias. ## Training data Colloquial singlish and manglish (both are a mixture of English, Mandarin, Tamil, Malay, and other local dialects like Hokkien, Cantonese or Teochew) corpus. The corpus is collected from subreddits- `r/singapore` and `r/malaysia`, and forums such as `hardwarezone`. ## Training procedure Initialized with [bert base uncased](https://github.com/google-research/bert#pre-trained-models) vocab and checkpoints (pre-trained weights). Top 1000 custom vocab tokens (non-overlapped with original bert vocab) were further extracted from training data and filled into unused tokens in original bert vocab. Pre-training was further finetuned on training data with the following hyperparameters * train_batch_size: 512 * max_seq_length: 128 * num_train_steps: 300000 * num_warmup_steps: 5000 * learning_rate: 2e-5 * hardware: TPU v3-8
zanelim/singbert-large-sg
zanelim
2021-05-20T09:36:17Z
9
4
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "pretraining", "singapore", "sg", "singlish", "malaysia", "ms", "manglish", "bert-large-uncased", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: - singapore - sg - singlish - malaysia - ms - manglish - bert-large-uncased license: mit datasets: - reddit singapore, malaysia - hardwarezone widget: - text: "kopi c siew [MASK]" - text: "die [MASK] must try" --- # Model name SingBert Large - Bert for Singlish (SG) and Manglish (MY). ## Model description Similar to [SingBert](https://huggingface.co/zanelim/singbert) but the large version, which was initialized from [BERT large uncased (whole word masking)](https://github.com/google-research/bert#pre-trained-models), with pre-training finetuned on [singlish](https://en.wikipedia.org/wiki/Singlish) and [manglish](https://en.wikipedia.org/wiki/Manglish) data. ## Intended uses & limitations #### How to use ```python >>> from transformers import pipeline >>> nlp = pipeline('fill-mask', model='zanelim/singbert-large-sg') >>> nlp("kopi c siew [MASK]") [{'sequence': '[CLS] kopi c siew dai [SEP]', 'score': 0.9003700017929077, 'token': 18765, 'token_str': 'dai'}, {'sequence': '[CLS] kopi c siew mai [SEP]', 'score': 0.0779474675655365, 'token': 14736, 'token_str': 'mai'}, {'sequence': '[CLS] kopi c siew. [SEP]', 'score': 0.0032227332703769207, 'token': 1012, 'token_str': '.'}, {'sequence': '[CLS] kopi c siew bao [SEP]', 'score': 0.0017727474914863706, 'token': 25945, 'token_str': 'bao'}, {'sequence': '[CLS] kopi c siew peng [SEP]', 'score': 0.0012526646023616195, 'token': 26473, 'token_str': 'peng'}] >>> nlp("one teh c siew dai, and one kopi [MASK]") [{'sequence': '[CLS] one teh c siew dai, and one kopi. [SEP]', 'score': 0.5249741077423096, 'token': 1012, 'token_str': '.'}, {'sequence': '[CLS] one teh c siew dai, and one kopi o [SEP]', 'score': 0.27349168062210083, 'token': 1051, 'token_str': 'o'}, {'sequence': '[CLS] one teh c siew dai, and one kopi peng [SEP]', 'score': 0.057190295308828354, 'token': 26473, 'token_str': 'peng'}, {'sequence': '[CLS] one teh c siew dai, and one kopi c [SEP]', 'score': 0.04022320732474327, 'token': 1039, 'token_str': 'c'}, {'sequence': '[CLS] one teh c siew dai, and one kopi? 
[SEP]', 'score': 0.01191170234233141, 'token': 1029, 'token_str': '?'}] >>> nlp("die [MASK] must try") [{'sequence': '[CLS] die die must try [SEP]', 'score': 0.9921030402183533, 'token': 3280, 'token_str': 'die'}, {'sequence': '[CLS] die also must try [SEP]', 'score': 0.004993876442313194, 'token': 2036, 'token_str': 'also'}, {'sequence': '[CLS] die liao must try [SEP]', 'score': 0.000317625846946612, 'token': 727, 'token_str': 'liao'}, {'sequence': '[CLS] die still must try [SEP]', 'score': 0.0002260878391098231, 'token': 2145, 'token_str': 'still'}, {'sequence': '[CLS] die i must try [SEP]', 'score': 0.00016935862367972732, 'token': 1045, 'token_str': 'i'}] >>> nlp("dont play [MASK] leh") [{'sequence': '[CLS] dont play play leh [SEP]', 'score': 0.9079819321632385, 'token': 2377, 'token_str': 'play'}, {'sequence': '[CLS] dont play punk leh [SEP]', 'score': 0.006846973206847906, 'token': 7196, 'token_str': 'punk'}, {'sequence': '[CLS] dont play games leh [SEP]', 'score': 0.004041737411171198, 'token': 2399, 'token_str': 'games'}, {'sequence': '[CLS] dont play politics leh [SEP]', 'score': 0.003728888463228941, 'token': 4331, 'token_str': 'politics'}, {'sequence': '[CLS] dont play cheat leh [SEP]', 'score': 0.0032805048394948244, 'token': 21910, 'token_str': 'cheat'}] >>> nlp("confirm plus [MASK]") {'sequence': '[CLS] confirm plus chop [SEP]', 'score': 0.9749826192855835, 'token': 24494, 'token_str': 'chop'}, {'sequence': '[CLS] confirm plus chopped [SEP]', 'score': 0.017554156482219696, 'token': 24881, 'token_str': 'chopped'}, {'sequence': '[CLS] confirm plus minus [SEP]', 'score': 0.002725469646975398, 'token': 15718, 'token_str': 'minus'}, {'sequence': '[CLS] confirm plus guarantee [SEP]', 'score': 0.000900257145985961, 'token': 11302, 'token_str': 'guarantee'}, {'sequence': '[CLS] confirm plus one [SEP]', 'score': 0.0004384620988275856, 'token': 2028, 'token_str': 'one'}] >>> nlp("catch no [MASK]") [{'sequence': '[CLS] catch no ball [SEP]', 'score': 0.9381157159805298, 'token': 3608, 'token_str': 'ball'}, {'sequence': '[CLS] catch no balls [SEP]', 'score': 0.060842301696538925, 'token': 7395, 'token_str': 'balls'}, {'sequence': '[CLS] catch no fish [SEP]', 'score': 0.00030917322146706283, 'token': 3869, 'token_str': 'fish'}, {'sequence': '[CLS] catch no breath [SEP]', 'score': 7.552534952992573e-05, 'token': 3052, 'token_str': 'breath'}, {'sequence': '[CLS] catch no tail [SEP]', 'score': 4.208395694149658e-05, 'token': 5725, 'token_str': 'tail'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('zanelim/singbert-large-sg') model = BertModel.from_pretrained("zanelim/singbert-large-sg") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained("zanelim/singbert-large-sg") model = TFBertModel.from_pretrained("zanelim/singbert-large-sg") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` #### Limitations and bias This model was finetuned on colloquial Singlish and Manglish corpus, hence it is best applied on downstream tasks involving the main constituent languages- english, mandarin, malay. Also, as the training data is mainly from forums, beware of existing inherent bias. 
## Training data Colloquial singlish and manglish (both are a mixture of English, Mandarin, Tamil, Malay, and other local dialects like Hokkien, Cantonese or Teochew) corpus. The corpus is collected from subreddits- `r/singapore` and `r/malaysia`, and forums such as `hardwarezone`. ## Training procedure Initialized with [bert large uncased (whole word masking)](https://github.com/google-research/bert#pre-trained-models) vocab and checkpoints (pre-trained weights). Top 1000 custom vocab tokens (non-overlapped with original bert vocab) were further extracted from training data and filled into unused tokens in original bert vocab. Pre-training was further finetuned on training data with the following hyperparameters * train_batch_size: 512 * max_seq_length: 128 * num_train_steps: 300000 * num_warmup_steps: 5000 * learning_rate: 2e-5 * hardware: TPU v3-8
ykacer/bert-base-cased-imdb-sequence-classification
ykacer
2021-05-20T09:31:37Z
6
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "sequence", "classification", "en", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en thumbnail: https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png tags: - sequence - classification license: apache-2.0 datasets: - imdb metrics: - accuracy ---
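A minimal usage sketch, not part of the original card; the review text is made up, and the returned label names come from the model config and should be inspected rather than assumed.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ykacer/bert-base-cased-imdb-sequence-classification",
)
print(classifier("A surprisingly moving film with terrific performances."))
```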
vespa-engine/colbert-medium
vespa-engine
2021-05-20T08:59:43Z
8
3
transformers
[ "transformers", "pytorch", "bert", "arxiv:2004.12832", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# MS Marco Ranking with ColBERT on Vespa.ai Model is based on [ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT](https://arxiv.org/abs/2004.12832). This BERT model is based on [google/bert_uncased_L-8_H-512_A-8](https://huggingface.co/google/bert_uncased_L-8_H-512_A-8) and trained using the original [ColBERT training routine](https://github.com/stanford-futuredata/ColBERT/). The model weights have been tuned by training using the `triples.train.small.tar.gz from` [MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking). To use this model with vespa.ai for MS Marco Passage Ranking, see [MS Marco Ranking using Vespa.ai sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking). # MS Marco Passage Ranking | MS Marco Passage Ranking Query Set | MRR@10 ColBERT on Vespa.ai | |------------------------------------|----------------| | Dev | 0.354 | | Eval | 0.347 | The official baseline BM25 ranking model MRR@10 0.16 on eval and 0.167 on dev question set. See [MS Marco Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/). ## Export ColBERT query encoder to ONNX We represent the ColBERT query encoder in the Vespa runtime, to map the textual query representation to the tensor representation. For this we use Vespa's support for running ONNX models. One can use the following snippet to export the model for serving. ```python from transformers import BertModel from transformers import BertPreTrainedModel from transformers import BertConfig import torch import torch.nn as nn class VespaColBERT(BertPreTrainedModel): def __init__(self,config): super().__init__(config) self.bert = BertModel(config) self.linear = nn.Linear(config.hidden_size, 32, bias=False) self.init_weights() def forward(self, input_ids, attention_mask): Q = self.bert(input_ids,attention_mask=attention_mask)[0] Q = self.linear(Q) return torch.nn.functional.normalize(Q, p=2, dim=2) colbert_query_encoder = VespaColBERT.from_pretrained("vespa-engine/colbert-medium") #Export model to ONNX for serving in Vespa input_names = ["input_ids", "attention_mask"] output_names = ["contextual"] #input, max 32 query term input_ids = torch.ones(1,32, dtype=torch.int64) attention_mask = torch.ones(1,32,dtype=torch.int64) args = (input_ids, attention_mask) torch.onnx.export(colbert_query_encoder, args=args, f="query_encoder_colbert.onnx", input_names = input_names, output_names = output_names, dynamic_axes = { "input_ids": {0: "batch"}, "attention_mask": {0: "batch"}, "contextual": {0: "batch"}, }, opset_version=11) ``` # Representing the model on Vespa.ai See [Ranking with ONNX models](https://docs.vespa.ai/documentation/onnx.html) and [MS Marco Ranking sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking)
tugstugi/bert-large-mongolian-cased
tugstugi
2021-05-20T08:16:24Z
28
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "mongolian", "cased", "mn", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: "mn" tags: - bert - mongolian - cased --- # BERT-LARGE-MONGOLIAN-CASED [Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert) ## Model description This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu). Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs. This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/), [huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese). #### How to use ```python from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-large-mongolian-cased', use_fast=False) model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-large-mongolian-cased') ## declare task ## pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer) ## example ## input_ = 'Монгол улсын [MASK] Улаанбаатар хотоос ярьж байна.' output_ = pipe(input_) for i in range(len(output_)): print(output_[i]) ## output ## # {'sequence': 'Монгол улсын нийслэл Улаанбаатар хотоос ярьж байна.', 'score': 0.9779232740402222, 'token': 1176, 'token_str': 'нийслэл'} # {'sequence': 'Монгол улсын Нийслэл Улаанбаатар хотоос ярьж байна.', 'score': 0.015034765936434269, 'token': 4059, 'token_str': 'Нийслэл'} # {'sequence': 'Монгол улсын Ерөнхийлөгч Улаанбаатар хотоос ярьж байна.', 'score': 0.0021413620561361313, 'token': 325, 'token_str': 'Ерөнхийлөгч'} # {'sequence': 'Монгол улсын ерөнхийлөгч Улаанбаатар хотоос ярьж байна.', 'score': 0.0008035294013097882, 'token': 1215, 'token_str': 'ерөнхийлөгч'} # {'sequence': 'Монгол улсын нийслэлийн Улаанбаатар хотоос ярьж байна.', 'score': 0.0006434018723666668, 'token': 356, 'token_str': 'нийслэлийн'} ``` ## Training data Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)] ### BibTeX entry and citation info ```bibtex @misc{mongolian-bert, author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold}, title = {BERT Pretrained Models on Mongolian Datasets}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}} } ```
tugstugi/bert-base-mongolian-uncased
tugstugi
2021-05-20T08:13:09Z
30
2
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "mongolian", "uncased", "mn", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: "mn"
tags:
- bert
- mongolian
- uncased
---

# BERT-BASE-MONGOLIAN-UNCASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)

## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu). Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs.

This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/), [huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).

#### How to use

```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-base-mongolian-uncased', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-base-mongolian-uncased')

## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)

## example ##
input_ = 'Миний [MASK] хоол идэх нь тун чухал.'
output_ = pipe(input_)
for i in range(len(output_)):
    print(output_[i])

## output ##
#{'sequence': 'миний хувьд хоол идэх нь тун чухал.', 'score': 0.7889143824577332, 'token': 126, 'token_str': 'хувьд'}
#{'sequence': 'миний бодлоор хоол идэх нь тун чухал.', 'score': 0.18616807460784912, 'token': 6106, 'token_str': 'бодлоор'}
#{'sequence': 'миний зүгээс хоол идэх нь тун чухал.', 'score': 0.004825591575354338, 'token': 761, 'token_str': 'зүгээс'}
#{'sequence': 'миний биед хоол идэх нь тун чухал.', 'score': 0.0015743684489279985, 'token': 3010, 'token_str': 'биед'}
#{'sequence': 'миний тухайд хоол идэх нь тун чухал.', 'score': 0.0014919431414455175, 'token': 1712, 'token_str': 'тухайд'}
```

## Training data
Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]

### BibTeX entry and citation info

```bibtex
@misc{mongolian-bert,
  author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
  title = {BERT Pretrained Models on Mongolian Datasets},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```
tugstugi/bert-base-mongolian-cased
tugstugi
2021-05-20T08:12:07Z
118
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "mongolian", "cased", "mn", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: "mn"
tags:
- bert
- mongolian
- cased
---

# BERT-BASE-MONGOLIAN-CASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)

## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu). Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs.

This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/), [huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).

#### How to use

```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-base-mongolian-cased', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-base-mongolian-cased')

## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)

## example ##
input_ = '[MASK] хот Монгол улсын нийслэл.'
output_ = pipe(input_)
for i in range(len(output_)):
    print(output_[i])

## output ##
# {'sequence': 'Улаанбаатар хот Монгол улсын нийслэл.', 'score': 0.826970100402832, 'token': 281, 'token_str': 'Улаанбаатар'}
# {'sequence': 'Нийслэл хот Монгол улсын нийслэл.', 'score': 0.06551621109247208, 'token': 4059, 'token_str': 'Нийслэл'}
# {'sequence': 'Эрдэнэт хот Монгол улсын нийслэл.', 'score': 0.0264141745865345, 'token': 2229, 'token_str': 'Эрдэнэт'}
# {'sequence': 'Дархан хот Монгол улсын нийслэл.', 'score': 0.017083868384361267, 'token': 1646, 'token_str': 'Дархан'}
# {'sequence': 'УБ хот Монгол улсын нийслэл.', 'score': 0.010854342952370644, 'token': 7389, 'token_str': 'УБ'}
```

## Training data
Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]

### BibTeX entry and citation info

```bibtex
@misc{mongolian-bert,
  author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
  title = {BERT Pretrained Models on Mongolian Datasets},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```
trueto/medbert-kd-chinese
trueto
2021-05-20T08:10:57Z
9
10
transformers
[ "transformers", "pytorch", "jax", "bert", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# [medbert](https://github.com/trueto/medbert)

This project releases the models from the master's thesis "Exploration and Research on the Application of BERT Models in Chinese Clinical Natural Language Processing".

## Evaluation benchmarks

Four benchmarks were built: a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER), a Chinese medical question–question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).

| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- | ---- | ---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud (医渡云) |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Medical (平安医疗) |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |

## Released models

MedBERT and MedAlbert were obtained by pre-training BERT and ALBERT on a 650-million-character corpus of Chinese clinical natural language text.

## Performance

Performance of each model under the same experimental environment, with identical training parameters and scripts:

| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
| MedBERT-wwm | **82.60%** | 67.11% | 88.02% | 81.72% |
| MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
| - | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
| MedAlbert-wwm | **81.28%** | **64.12%** | **87.71%** | **80.46%** |

## Citation

```
Yang Feihong, Wang Xuwen, Li Jiao. Exploration and Research on the Application of BERT Models in Chinese Clinical Natural Language Processing [EB/OL]. https://github.com/trueto/medbert, 2021-03.
```
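The card above does not include a usage snippet. Below is a minimal sketch of loading the checkpoint with the standard transformers classes; the example sentence and the [CLS]-pooling choice are ours, not part of the original project.

```python
# Minimal sketch (assumption: the checkpoint loads with the standard BERT classes).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("trueto/medbert-kd-chinese")
model = AutoModel.from_pretrained("trueto/medbert-kd-chinese")

text = "患者主诉头痛三天。"  # "The patient reports a headache for three days." (example sentence is ours)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_embedding = outputs.last_hidden_state[:, 0]  # [CLS] vector, usable as a sentence representation
print(cls_embedding.shape)  # (1, hidden_size)
```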
trueto/medbert-base-wwm-chinese
trueto
2021-05-20T08:09:44Z
8
9
transformers
[ "transformers", "pytorch", "jax", "bert", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# [medbert](https://github.com/trueto/medbert)

This project releases the models from the master's thesis "Exploration and Research on the Application of BERT Models in Chinese Clinical Natural Language Processing".

## Evaluation benchmarks

Four benchmarks were built: a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER), a Chinese medical question–question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).

| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- | ---- | ---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud (医渡云) |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Medical (平安医疗) |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |

## Released models

MedBERT and MedAlbert were obtained by pre-training BERT and ALBERT on a 650-million-character corpus of Chinese clinical natural language text.

## Performance

Performance of each model under the same experimental environment, with identical training parameters and scripts:

| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
| MedBERT-wwm | **82.60%** | 67.11% | 88.02% | 81.72% |
| MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
| - | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
| MedAlbert-wwm | **81.28%** | **64.12%** | **87.71%** | **80.46%** |

## Citation

```
Yang Feihong, Wang Xuwen, Li Jiao. Exploration and Research on the Application of BERT Models in Chinese Clinical Natural Language Processing [EB/OL]. https://github.com/trueto/medbert, 2021-03.
```
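As with the other MedBERT checkpoints, the card provides no usage snippet. The sketch below shows one way to initialize a sequence classifier from this checkpoint for a task such as CCTC; the number of labels is a placeholder and the classification head is randomly initialized until fine-tuned.

```python
# Minimal sketch (num_labels is a placeholder; the head must be fine-tuned before use).
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "trueto/medbert-base-wwm-chinese"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=4)

batch = tokenizer(["入院后完善相关检查。"], truncation=True, padding=True, return_tensors="pt")
outputs = model(**batch)
print(outputs.logits.shape)  # (1, num_labels)
```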
trtd56/autonlp-wrime_joy_only-117396
trtd56
2021-05-20T08:07:48Z
4
1
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autonlp", "ja", "dataset:trtd56/autonlp-data-wrime_joy_only", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
tags: autonlp
language: ja
widget:
- text: "I love AutoNLP 🤗"
datasets:
- trtd56/autonlp-data-wrime_joy_only
---

# Model Trained Using AutoNLP

- Problem type: Binary Classification
- Model ID: 117396

## Validation Metrics

- Loss: 0.4094310998916626
- Accuracy: 0.8201678240740741
- Precision: 0.6750303520841765
- Recall: 0.7912713472485768
- AUC: 0.8927167943538512
- F1: 0.728543350076436

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/trtd56/autonlp-wrime_joy_only-117396
```

Or the Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("trtd56/autonlp-wrime_joy_only-117396", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("trtd56/autonlp-wrime_joy_only-117396", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
trituenhantaoio/bert-base-vietnamese-diacritics-uncased
trituenhantaoio
2021-05-20T08:05:47Z
6
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
## Usage

```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer

model = BertForSequenceClassification.from_pretrained("trituenhantaoio/bert-base-vietnamese-diacritics-uncased")
tokenizer = BertTokenizer.from_pretrained("trituenhantaoio/bert-base-vietnamese-diacritics-uncased")
```

### References

```
@article{ttnt2020bertdiacritics,
  title={Vietnamese BERT Diacritics: Pretrained on News and Wiki},
  author={trituenhantao.io},
  year = {2020},
  publisher = {Hugging Face},
  journal = {Hugging Face repository}
}
```

[trituenhantao.io](https://trituenhantao.io)
textattack/bert-base-uncased-rotten_tomatoes
textattack
2021-05-20T07:47:13Z
7
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
## bert-base-uncased fine-tuned with TextAttack on the rotten_tomatoes dataset

This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned for 10 epochs with a batch size of 64, a learning rate of 5e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.875234521575985, as measured by the eval set accuracy, found after 4 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
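The card itself has no usage example; a minimal sketch with the transformers pipeline follows. The example sentence is ours, and the label names in the output depend on the uploaded config (they may appear as generic LABEL_0/LABEL_1).

```python
# Minimal sketch: binary sentiment classification on a movie-review style sentence.
from transformers import pipeline

classifier = pipeline("text-classification", model="textattack/bert-base-uncased-rotten_tomatoes")
print(classifier("A gorgeous, witty, and quietly moving film."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- the label-to-sentiment mapping comes from the model config
```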
textattack/bert-base-uncased-ag-news
textattack
2021-05-20T07:40:21Z
2,911
4
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card

This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9514473684210526, as measured by the eval set accuracy, found after 3 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
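No usage snippet is included in the card; the sketch below scores a headline against the four AG News classes. The example headline and the class ordering noted in the comment are assumptions; the authoritative label mapping lives in the model's config.

```python
# Minimal sketch: 4-way AG News topic classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/bert-base-uncased-ag-news"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Wall St. rallies as oil prices slide.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # one probability per class (commonly World, Sports, Business, Sci/Tech -- check model.config.id2label)
```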
textattack/bert-base-uncased-WNLI
textattack
2021-05-20T07:39:22Z
44
1
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card

This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 64, a learning rate of 5e-05, and a maximum sequence length of 256. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.5633802816901409, as measured by the eval set accuracy, found after 1 epoch.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/bert-base-uncased-RTE
textattack
2021-05-20T07:36:18Z
81
3
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card

This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 8, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.7256317689530686, as measured by the eval set accuracy, found after 2 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
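RTE is a sentence-pair task, so the premise and hypothesis are encoded together. The sketch below is not from the original card; the example pair is ours, and which logit index corresponds to "entailment" depends on the model's label mapping.

```python
# Minimal sketch: sentence-pair classification for RTE (textual entailment).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/bert-base-uncased-RTE"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A man is playing a guitar on stage."
hypothesis = "Someone is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # two classes; consult model.config.id2label for which index means entailment
```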
tennessejoyce/titlewave-bert-base-uncased
tennessejoyce
2021-05-20T07:29:09Z
11
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "en", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: en
license: cc-by-4.0
widget:
- text: "[Gmail API] How can I extract plain text from an email sent to me?"
---

# Titlewave: bert-base-uncased

## Model description

Titlewave is a Chrome extension that helps you choose better titles for your Stack Overflow questions. See the [github repository](https://github.com/tennessejoyce/TitleWave) for more information.

This is one of two NLP models used in the Titlewave project. Its purpose is to classify whether a question will be answered or not, based only on its title. The [companion model](https://huggingface.co/tennessejoyce/titlewave-t5-small) suggests a new title based on the body of the question.

## Intended use

Try out different titles for your Stack Overflow post, and see which one gives you the best chance of receiving an answer. You can use the model through the API on this page (hosted by HuggingFace) or install the Chrome extension by following the instructions in the [github repository](https://github.com/tennessejoyce/TitleWave), which integrates the tool directly into the Stack Overflow website.

You can also run the model locally in Python like this (which automatically downloads the model to your machine):

```python
>>> from transformers import pipeline
>>> classifier = pipeline('sentiment-analysis', model='tennessejoyce/titlewave-bert-base-uncased')
>>> classifier('[Gmail API] How can I extract plain text from an email sent to me?')

[{'label': 'Answered', 'score': 0.8053370714187622}]
```

The 'score' in the output represents the probability of getting an answer with this title: 80.5%.

## Training data

The weights were initialized from the [BERT base model](https://huggingface.co/bert-base-uncased), which was trained on BookCorpus and English Wikipedia. The model was then fine-tuned on a dataset of previous Stack Overflow post titles, which is publicly available [here](https://archive.org/details/stackexchange). Specifically, I used three years of posts from 2017-2019, filtered out posts which were closed (e.g., duplicates or off-topic), and selected 5% of the remaining posts at random for the training set, with the same amount for the validation and test sets (278,155 posts each).

## Training procedure

The model was fine-tuned for two epochs with a batch size of 32 (17,384 steps total) using 16-bit mixed precision. After some hyperparameter tuning, I found that the following two-phase training procedure yields the best performance (ROC-AUC score) on the validation set:

* In the first epoch, all layers were frozen except for the last two (pooling layer and classification layer) and a learning rate of 3e-4 was used.
* In the second epoch all layers were unfrozen, and the learning rate was decreased by a factor of 10 to 3e-5.

Otherwise, all parameters were set to the defaults listed [here](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), including the AdamW optimizer and a linearly decreasing learning rate schedule (both of which were reset between the two epochs). See the [github repository](https://github.com/tennessejoyce/TitleWave) for the scripts that were used to train the model.

## Evaluation

See [this notebook](https://github.com/tennessejoyce/TitleWave/blob/master/model_training/test_classifier.ipynb) for the performance of the title classification model on the test set.
susumu2357/bert-base-swedish-squad2
susumu2357
2021-05-20T07:20:04Z
99
1
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "question-answering", "squad", "sv", "dataset:susumu2357/squad_v2_sv", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language:
- sv
tags:
- squad
license: apache-2.0
datasets:
- susumu2357/squad_v2_sv
metrics:
- squad_v2
---

# Swedish BERT Fine-tuned on SQuAD v2

This model is a fine-tuned checkpoint of Swedish BERT on SQuAD v2.

## Training data

Fine-tuning was done starting from the pre-trained model [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased). The training and dev datasets are our [Swedish translation of SQuAD v2](https://github.com/susumu2357/SQuAD_v2_sv). The corresponding Hugging Face dataset is available [here](https://huggingface.co/datasets/susumu2357/squad_v2_sv).

## Hyperparameters

```
batch_size = 16
n_epochs = 2
max_seq_len = 386
learning_rate = 3e-5
warmup_steps = 2900  # warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```

## Eval results

```
'exact': 66.72642524202223
'f1': 70.11149581003404
'total': 11156
'HasAns_exact': 55.574745730186144
'HasAns_f1': 62.821693965983044
'HasAns_total': 5211
'NoAns_exact': 76.50126156433979
'NoAns_f1': 76.50126156433979
'NoAns_total': 5945
```

## Limitations and bias

This model may contain biases due to mistranslations of the SQuAD dataset.

## BibTeX entry and citation info

```bibtex
@misc{svSQuADbert,
  author = {Susumu Okazawa},
  title = {Swedish BERT Fine-tuned on Swedish SQuAD 2.0},
  year = {2021},
  howpublished = {\url{https://huggingface.co/susumu2357/bert-base-swedish-squad2}},
}
```
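For completeness, a minimal sketch of running the model with the question-answering pipeline is shown below; the Swedish question and context are our own example, not from the card.

```python
# Minimal sketch: extractive QA with the fine-tuned Swedish checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="susumu2357/bert-base-swedish-squad2")
result = qa(
    question="Vad heter Sveriges huvudstad?",                     # "What is the capital of Sweden?"
    context="Stockholm är Sveriges huvudstad och största stad.",  # "Stockholm is Sweden's capital and largest city."
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Stockholm'}
```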