Columns: pipeline_tag (string, 48 classes), library_name (string, 205 classes), text (string, 0-18.3M chars), metadata (string, 2-1.07B chars), id (string, 5-122 chars), last_modified (null), tags (list, 1-1.84k items), sha (null), created_at (string, 25 chars)
null
null
## SunBERT

SunBERT is a variant of BERT trained on Ugandan text data for two tasks: ``Covid/Non Covid`` tweet classification and classification of social media news articles as ``Organic, Promotional or Editorial``.

Information has become more abundant with the internet; in particular, people communicate in natural language over social media, and machine learning offers a good way to analyze that language. We used deep learning methods to analyze text from social media, building models on the Bidirectional Encoder Representations from Transformers (BERT) architecture to perform two downstream tasks: 1. classify social media posts as promotional, editorial, or organic, and 2. identify tweets as COVID-19-related or not. Both tasks show how machine learning can be used to analyze large amounts of data and support decision making. We open-source the dataset and the source code of our model, called SunBERT, so that others can adapt these techniques to their own needs.

## Datasets:

We use data from Twitter and Facebook. The dataset contains tweets and posts from both social networks, collected through CrowdTangle, a tool from Facebook for following, analyzing, and reporting on what's happening across social media.

## Models:

BERT (Bidirectional Encoder Representations from Transformers) is a deep learning model published by researchers at Google AI. It achieved state-of-the-art performance on a range of Natural Language Processing tasks, including question answering, text classification, and language modelling. Its key technical innovation is applying bidirectional training of the Transformer, a popular attention-based model, to language processing.

## Use Cases:

We have shown the application of SunBERT to three use cases: COVID-19 classification, news classification, and language adaptation for machine learning research and development. However, SunBERT can be extended to perform other tasks, including question answering, masked language modelling, and next sentence prediction. Our code and datasets can be used as a starting point for any of these tasks, with minor modifications to the model architecture.
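The card above describes the classification tasks but does not include an inference snippet. A minimal sketch, assuming the `Sunbird/sunbert` checkpoint loads as a standard BERT sequence classifier through transformers; the example tweet is illustrative and the real label names come from the model's own config:

```python
# Hypothetical usage sketch, not from the original card: assumes the
# Sunbird/sunbert checkpoint loads as a standard BERT sequence classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Sunbird/sunbert")
model = AutoModelForSequenceClassification.from_pretrained("Sunbird/sunbert")

tweet = "Uganda records new covid cases today"  # illustrative input
inputs = tokenizer(tweet, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# The label names ship with the model config; we only look up the predicted id.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_id, predicted_id))
```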
{}
Sunbird/sunbert
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
English to Luganda text translation
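The line above is the entire card for `Sunbird/sunbird-en-lg`. Judging from the repo tags (marian, text2text-generation), a standard MarianMT translation call presumably works; a sketch under that assumption, with an example sentence of our own:

```python
# Sketch only: assumes Sunbird/sunbird-en-lg follows the standard MarianMT
# translation interface implied by its "marian" / "text2text-generation" tags.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Sunbird/sunbird-en-lg"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Good morning, how are you?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```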
{}
Sunbird/sunbird-en-lg
null
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
Sunbird/sunbird-en-mul
null
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
Sunbird/sunbird-mul-en
null
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
SunnyS2/Sunny
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Sunnydx/BillCipher
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Bill Cipher chat bot
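The card for `Sunnydx/BillCipherBot` is only a title. Given the gpt2 and conversational tags on the repo, a DialoGPT-style exchange is presumably the intended usage; the following is a rough, unverified sketch of that pattern:

```python
# Rough sketch, not from the card: assumes a DialoGPT-style GPT-2 chat model.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Sunnydx/BillCipherBot")
model = AutoModelForCausalLM.from_pretrained("Sunnydx/BillCipherBot")

# Encode one user message followed by the end-of-sequence token, then generate a reply.
user_input = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(user_input, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, user_input.shape[-1]:][0], skip_special_tokens=True))
```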
{"tags": ["conversational"]}
Sunnydx/BillCipherBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
[SuperAI Engineer Season 2](https://superai.aiat.or.th/), [Machima](https://machchima.superai.me/), [Google's mT5](https://github.com/google-research/multilingual-t5), [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg)

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Run on GPU if available.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg-v2').to(device)
tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg-v2')

source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น'

print('Predicted Summary Text : ')
tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text, num_beams=4, no_repeat_ngram_size=2, max_length=50, early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
# Predicted Summary Text :
# answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น
```
{"language": ["thai", "th"], "license": "mit", "tags": ["question-generation"], "datasets": ["NSC2018", "wiki-documents-nsc", "ThaiQACorpus-DevelopmentDataset"], "widget": [{"text": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e1a\u0e49\u0e32\u0e19\u0e02\u0e38\u0e19\u0e14\u0e48\u0e32\u0e19 \u0e15\u0e31\u0e49\u0e07\u0e2d\u0e22\u0e39\u0e48\u0e17\u0e35\u0e48\u0e02\u0e38\u0e19\u0e14\u0e48\u0e32\u0e19 \u0e08.\u0e19\u0e04\u0e23\u0e19\u0e32\u0e22\u0e01 </s>", "example_title": "Example 01"}, {"text": "\u0e1e\u0e25\u0e40\u0e2d\u0e01 \u0e1b\u0e23\u0e30\u0e22\u0e38\u0e17\u0e18\u0e4c \u0e08\u0e31\u0e19\u0e17\u0e23\u0e4c\u0e42\u0e2d\u0e0a\u0e32 (\u0e40\u0e01\u0e34\u0e14 21 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2497) \u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e25\u0e48\u0e19 \u0e15\u0e39\u0e48 \u0e40\u0e1b\u0e47\u0e19\u0e19\u0e31\u0e01\u0e01\u0e32\u0e23\u0e40\u0e21\u0e37\u0e2d\u0e07\u0e41\u0e25\u0e30\u0e2d\u0e14\u0e35\u0e15\u0e19\u0e32\u0e22\u0e17\u0e2b\u0e32\u0e23\u0e1a\u0e01\u0e0a\u0e32\u0e27\u0e44\u0e17\u0e22 </s>", "example_title": "Example 02"}, {"text": "\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e01\u0e31\u0e19\u0e22\u0e32\u0e22\u0e19 2550 12:00 \u0e19. \u0e15\u0e33\u0e23\u0e27\u0e08\u0e20\u0e39\u0e18\u0e23\u0e08.\u0e1a\u0e38\u0e23\u0e35\u0e23\u0e31\u0e21\u0e22\u0e4c\u0e1a\u0e38\u0e01\u0e15\u0e23\u0e27\u0e08\u0e22\u0e36\u0e14\u0e44\u0e21\u0e49\u0e41\u0e1b\u0e23\u0e23\u0e39\u0e1b\u0e2b\u0e27\u0e07\u0e2b\u0e49\u0e32\u0e21\u0e01\u0e27\u0e48\u0e32 80 \u0e41\u0e1c\u0e48\u0e19 </s>", "example_title": "Example 03"}, {"text": "\u0e01\u0e23\u0e38\u0e07\u0e40\u0e17\u0e1e\u0e21\u0e2b\u0e32\u0e19\u0e04\u0e23 \u0e40\u0e1b\u0e47\u0e19\u0e28\u0e39\u0e19\u0e22\u0e4c\u0e01\u0e25\u0e32\u0e07\u0e01\u0e32\u0e23\u0e1b\u0e01\u0e04\u0e23\u0e2d\u0e07 \u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32 \u0e01\u0e32\u0e23\u0e04\u0e21\u0e19\u0e32\u0e04\u0e21\u0e02\u0e19\u0e2a\u0e48\u0e07 \u0e01\u0e32\u0e23\u0e40\u0e07\u0e34\u0e19\u0e01\u0e32\u0e23\u0e18\u0e19\u0e32\u0e04\u0e32\u0e23 \u0e01\u0e32\u0e23\u0e1e\u0e32\u0e13\u0e34\u0e0a\u0e22\u0e4c \u0e01\u0e32\u0e23\u0e2a\u0e37\u0e48\u0e2d\u0e2a\u0e32\u0e23 \u0e41\u0e25\u0e30\u0e04\u0e27\u0e32\u0e21\u0e40\u0e08\u0e23\u0e34\u0e0d\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28 \u0e15\u0e31\u0e49\u0e07\u0e2d\u0e22\u0e39\u0e48\u0e1a\u0e19\u0e2a\u0e32\u0e21\u0e40\u0e2b\u0e25\u0e35\u0e48\u0e22\u0e21\u0e1b\u0e32\u0e01\u0e41\u0e21\u0e48\u0e19\u0e49\u0e33\u0e40\u0e08\u0e49\u0e32\u0e1e\u0e23\u0e30\u0e22\u0e32 \u0e21\u0e35\u0e41\u0e21\u0e48\u0e19\u0e49\u0e33\u0e40\u0e08\u0e49\u0e32\u0e1e\u0e23\u0e30\u0e22\u0e32\u0e44\u0e2b\u0e25\u0e1c\u0e48\u0e32\u0e19\u0e41\u0e25\u0e30\u0e41\u0e1a\u0e48\u0e07\u0e40\u0e21\u0e37\u0e2d\u0e07\u0e2d\u0e2d\u0e01\u0e40\u0e1b\u0e47\u0e19 2 \u0e1d\u0e31\u0e48\u0e07 \u0e04\u0e37\u0e2d \u0e1d\u0e31\u0e48\u0e07\u0e1e\u0e23\u0e30\u0e19\u0e04\u0e23\u0e41\u0e25\u0e30\u0e1d\u0e31\u0e48\u0e07\u0e18\u0e19\u0e1a\u0e38\u0e23\u0e35 \u0e01\u0e23\u0e38\u0e07\u0e40\u0e17\u0e1e\u0e21\u0e2b\u0e32\u0e19\u0e04\u0e23\u0e21\u0e35\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e17\u0e31\u0e49\u0e07\u0e2b\u0e21\u0e14 1,568.737 \u0e15\u0e23.\u0e01\u0e21. </s>", "example_title": "Example 04"}]}
SuperAI2-Machima/mt5-small-thai-qg-v2
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "question-generation", "dataset:NSC2018", "dataset:wiki-documents-nsc", "dataset:ThaiQACorpus-DevelopmentDataset", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
[SuperAI Engineer Season 2](https://superai.aiat.or.th/), [Machima](https://machchima.superai.me/), [Google's mT5](https://github.com/google-research/multilingual-t5), [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg)

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Run on GPU if available.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg').to(device)
tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg')

source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น'

print('Predicted Summary Text : ')
tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text, num_beams=4, no_repeat_ngram_size=2, max_length=50, early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
# Predicted Summary Text :
# answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น
```
{"language": ["thai", "th"], "license": "mit", "tags": ["question-generation"], "datasets": ["NSC2018", "wiki-documents-nsc", "ThaiQACorpus-DevelopmentDataset"], "widget": [{"text": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e1a\u0e49\u0e32\u0e19\u0e02\u0e38\u0e19\u0e14\u0e48\u0e32\u0e19 \u0e15\u0e31\u0e49\u0e07\u0e2d\u0e22\u0e39\u0e48\u0e17\u0e35\u0e48\u0e02\u0e38\u0e19\u0e14\u0e48\u0e32\u0e19 \u0e08.\u0e19\u0e04\u0e23\u0e19\u0e32\u0e22\u0e01", "example_title": "Example 01"}, {"text": "\u0e1e\u0e25\u0e40\u0e2d\u0e01 \u0e1b\u0e23\u0e30\u0e22\u0e38\u0e17\u0e18\u0e4c \u0e08\u0e31\u0e19\u0e17\u0e23\u0e4c\u0e42\u0e2d\u0e0a\u0e32 (\u0e40\u0e01\u0e34\u0e14 21 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2497) \u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e25\u0e48\u0e19 \u0e15\u0e39\u0e48 \u0e40\u0e1b\u0e47\u0e19\u0e19\u0e31\u0e01\u0e01\u0e32\u0e23\u0e40\u0e21\u0e37\u0e2d\u0e07\u0e41\u0e25\u0e30\u0e2d\u0e14\u0e35\u0e15\u0e19\u0e32\u0e22\u0e17\u0e2b\u0e32\u0e23\u0e1a\u0e01\u0e0a\u0e32\u0e27\u0e44\u0e17\u0e22", "example_title": "Example 02"}, {"text": "\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e01\u0e31\u0e19\u0e22\u0e32\u0e22\u0e19 2550 12:00 \u0e19. \u0e15\u0e33\u0e23\u0e27\u0e08\u0e20\u0e39\u0e18\u0e23\u0e08.\u0e1a\u0e38\u0e23\u0e35\u0e23\u0e31\u0e21\u0e22\u0e4c\u0e1a\u0e38\u0e01\u0e15\u0e23\u0e27\u0e08\u0e22\u0e36\u0e14\u0e44\u0e21\u0e49\u0e41\u0e1b\u0e23\u0e23\u0e39\u0e1b\u0e2b\u0e27\u0e07\u0e2b\u0e49\u0e32\u0e21\u0e01\u0e27\u0e48\u0e32 80 \u0e41\u0e1c\u0e48\u0e19", "example_title": "Example 03"}]}
SuperAI2-Machima/mt5-small-thai-qg
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "question-generation", "dataset:NSC2018", "dataset:wiki-documents-nsc", "dataset:ThaiQACorpus-DevelopmentDataset", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
[SuperAI Engineer Season 2](https://superai.aiat.or.th/), [Machima](https://machchima.superai.me/), [Google's mT5](https://github.com/google-research/multilingual-t5), [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg)

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Run on GPU if available.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-yes-no-qg').to(device)
tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-yes-no-qg')

source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น'

print('Predicted Summary Text : ')
tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text, num_beams=4, no_repeat_ngram_size=2, max_length=50, early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
# Predicted Summary Text :
# answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น
```
{"language": ["thai", "th"], "license": "mit", "tags": ["Yes No question-generation"], "datasets": ["NSC2018", "wiki-documents-nsc", "ThaiQACorpus-DevelopmentDataset"], "widget": [{"text": "\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e01\u0e31\u0e19\u0e22\u0e32\u0e22\u0e19 2550 12:00 \u0e19. \u0e15\u0e33\u0e23\u0e27\u0e08\u0e20\u0e39\u0e18\u0e23\u0e08.\u0e1a\u0e38\u0e23\u0e35\u0e23\u0e31\u0e21\u0e22\u0e4c\u0e1a\u0e38\u0e01\u0e15\u0e23\u0e27\u0e08\u0e22\u0e36\u0e14\u0e44\u0e21\u0e49\u0e41\u0e1b\u0e23\u0e23\u0e39\u0e1b\u0e2b\u0e27\u0e07\u0e2b\u0e49\u0e32\u0e21\u0e01\u0e27\u0e48\u0e32 80 \u0e41\u0e1c\u0e48\u0e19", "example_title": "Example 01"}, {"text": "\u0e1e\u0e25\u0e40\u0e2d\u0e01 \u0e1b\u0e23\u0e30\u0e22\u0e38\u0e17\u0e18\u0e4c \u0e08\u0e31\u0e19\u0e17\u0e23\u0e4c\u0e42\u0e2d\u0e0a\u0e32 (\u0e40\u0e01\u0e34\u0e14 21 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2497) \u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e25\u0e48\u0e19 \u0e15\u0e39\u0e48 \u0e40\u0e1b\u0e47\u0e19\u0e19\u0e31\u0e01\u0e01\u0e32\u0e23\u0e40\u0e21\u0e37\u0e2d\u0e07\u0e41\u0e25\u0e30\u0e2d\u0e14\u0e35\u0e15\u0e19\u0e32\u0e22\u0e17\u0e2b\u0e32\u0e23\u0e1a\u0e01\u0e0a\u0e32\u0e27\u0e44\u0e17\u0e22", "example_title": "Example 02"}]}
SuperAI2-Machima/mt5-small-thai-yes-no-qg
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "Yes No question-generation", "dataset:NSC2018", "dataset:wiki-documents-nsc", "dataset:ThaiQACorpus-DevelopmentDataset", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
SuperAI2-Machima/wangchan-finetune-ner-pos-v3
null
[ "transformers", "pytorch", "camembert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
SuperApril12/TEST
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
SuperApril12/hello
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
SuperDoge/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Superdooperhero/test1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# FreeIsland AI

With the advancement of the graphical processing power of computers and sophisticated algorithms like [Nanite](https://docs.unrealengine.com/5.0/en-US/RenderingFeatures/Nanite/), simulating lifelike scenery in real time has never been easier. About a month ago Epic Games [showed off](https://www.youtube.com/watch?v=WU0gvPcc3jQ) the newest capabilities of their latest game engine by simulating an entire city, including population, traffic, and weather, running on a PlayStation 5. That made me think about what is missing from that simulation and how I could use my skills to improve it.

One of the main components separating our world from the simulated one is people, and more importantly, the interactivity of people in simulated worlds. Last year the game Cyberpunk 2077 was released with an option to [talk to any person](https://www.youtube.com/watch?v=Z1OtYGzUoSo) in its city, but all the responses from the Non-Player Characters (NPCs) are hardcoded, which greatly reduces the immersion of the game.

So the goal of this project is to experiment with how advances in Natural Language Processing can make NPCs in video games interactive and enhance immersion.

# Usage

```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

trained_model = AutoModelForSeq2SeqLM.from_pretrained("Supiri/t5-base-conversation")
tokenizer = AutoTokenizer.from_pretrained("Supiri/t5-base-conversation")

prompt = "What's your name?"
context = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody."

input_ids = tokenizer(f"personality: {context}", f"inquiry: {prompt}", return_tensors='pt').input_ids
outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=2.5, num_beam_groups=2)

print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True))
# Answer: My name is Hinata
```

# Evaluation

## Test 1

For this test, I sampled an input from the test dataset. For this question the actual response is

> "It works a little."

but the model's response was

> "I don't want to flirt with you."

which reflects its bio, which was filled in by GPT-3:

> "He stands primarily to gain self-esteem, which he often receives through the submission of others"

In gist, Dr. Greenbaum tried to tease Sebastian about his seductive traits, but this model's go-to response was to shut her down, since the biography of Sebastian states he often tries to assert his dominance over others.

```py
prompt = dataset['test'][66]['request']
contexts = dataset['test'][66]['bio']

input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids
outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)

print("Input to the Model")
print("Bio:\t", contexts)
print("\nPrompt:\t", prompt)

print("\nGround truth response")
print("\t", dataset['test'][66]['response'])

print("\nModel's Prediction")
print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True))
```

```txt
Input to the Model
Bio: Sebastian is a very extreme representation of the trope of the "Confidence Man", and acts it out to a degree that is sometimes comedic but mostly frightening. He stands primarily to gain self-esteem, which he often receives through the submission of others or solely through his own perceptions. An artful seducer, his incredible charisma is both his greatest weapon and most intoxicating weakness.

Prompt: You think you can come in here with that cute little smirk on your face and try and flirt with me. It doesn't work, Sebastian.

Ground truth response
 It works a little.

Model's Prediction
Answer: I don't want to flirt with you.
```

## Test 2

Hinata is a kind-hearted girl from the anime series Naruto. I took her bio from the [personality database](https://www.personality-database.com/profile/2790/hinata-hyga-naruto-shippden-mbti-personality-type) and asked a few questions about her.

Right away, you can see the model understands the context: when I asked the model, "**What's your name?**", it responded with the name given in the context. Also, notice that when prompted with the same question phrased differently (**"Who are you?"**), it still manages to answer it well.

```py
prompts = ["What's your name?", "How are you feeling?", "Do you like Star Wars?", "Who are you?", "Coffee or tea?"]
contexts = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody."

print("Bio:\t", contexts, "\n")

for prompt in prompts:
    input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids
    outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)
    print("Prompt:\t", prompt)
    print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True), "\n")
```

```txt
Bio: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody.

Prompt: What's your name?
Answer: My name is Hinata

Prompt: How are you feeling?
Answer: I'm fine.

Prompt: Do you like Star Wars?
Answer: No, I don't.

Prompt: Who are you?
Answer: My name is Hinata

Prompt: Coffee or tea?
Answer: No, I don't drink much.
```

# Conclusion

After training the `t5-base` model for 5 epochs, the model started adapting to the dataset, but there is a lot more room for improvement.

1. During dataset creation I had to limit the dataset to 200 unique characters out of the 9,035 present in the dataset due to **budget constraints**. If I manage to cover at least half of the dataset, this model will come up with far better responses.
2. Both the input size and the batch size were severely constrained by the lack of GPU memory. Using a batch size of 64 instead of 8 would bring major improvements in both training time and the **generalization of the model**.
3. Using a bigger model like `t5-large` or `t5-3b` will certainly improve performance.
4. One of the main downsides of using this pre-trained model is that it was also trained on German, French, and Romanian, which consumed a chunk of the **vocabulary size and trainable parameters**. Retraining the model from scratch would reduce both the parameter count and the training loss for this specific task.
{"language": "en", "license": "gpl-3.0", "tags": ["NLP", "ChatBot", "Game AI"], "datasets": ["cornell_movie_dialog"], "metrics": ["rouge"], "widget": [{"text": "personality: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody.</s> inquiry: What's your name?", "example_title": "Talk to Hinata"}, {"text": "personality: Voldemort is a raging psychopath, devoid of the normal human responses to other people's suffering. He has no conscience, feels no remorse or empathy, and does not recognize the worth and humanity of anybody except himself.</s> inquiry: What's your name?", "example_title": "Talk to Voldemort"}], "inference": {"parameters": {"num_beams": 6, "diversity_penalty": 2.5, "num_beam_groups": 2}}}
Supiri/t5-base-conversation
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "NLP", "ChatBot", "Game AI", "en", "dataset:cornell_movie_dialog", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
SupriyaArun/bert-base-uncased-finetuned-squad-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-squad

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.0755

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0698        | 1.0   | 5533  | 1.0240          |
| 0.7813        | 2.0   | 11066 | 1.0310          |
| 0.608         | 3.0   | 16599 | 1.0755          |

### Framework versions

- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
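The card lists the training setup but no inference example. A minimal sketch, assuming the standard transformers question-answering pipeline works with this extractive SQuAD model (the question and context are made up for illustration):

```python
# Minimal sketch (not from the card): query the fine-tuned SQuAD model
# through the standard question-answering pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="SupriyaArun/bert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```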
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-finetuned-squad", "results": []}]}
SupriyaArun/bert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
SupriyaArun/distilbert-base-uncased-finetuned-squad-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.1569

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2213        | 1.0   | 5533  | 1.1560          |
| 0.943         | 2.0   | 11066 | 1.1227          |
| 0.7633        | 3.0   | 16599 | 1.1569          |

### Framework versions

- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
SupriyaArun/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# squeezebert-uncased-finetuned-squad-finetuned-squad

This model is a fine-tuned version of [SupriyaArun/squeezebert-uncased-finetuned-squad](https://huggingface.co/SupriyaArun/squeezebert-uncased-finetuned-squad) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Framework versions

- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "squeezebert-uncased-finetuned-squad-finetuned-squad", "results": []}]}
SupriyaArun/squeezebert-uncased-finetuned-squad-finetuned-squad
null
[ "transformers", "pytorch", "squeezebert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# squeezebert-uncased-finetuned-squad

This model is a fine-tuned version of [squeezebert/squeezebert-uncased](https://huggingface.co/squeezebert/squeezebert-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.0808

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2624        | 1.0   | 5533  | 1.1648          |
| 1.0699        | 2.0   | 11066 | 1.0920          |
| 0.9463        | 3.0   | 16599 | 1.0808          |

### Framework versions

- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "squeezebert-uncased-finetuned-squad", "results": []}]}
SupriyaArun/squeezebert-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "squeezebert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Surabhi/sdr-conversion
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Surabhi/sentiment
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Surenis/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# BLEURT

Pretrained model for the English language. It was introduced in [this paper](https://arxiv.org/pdf/2004.04696.pdf), described in [this blogpost](https://ai.googleblog.com/2020/05/evaluating-natural-language-generation.html) and first released in [this repository](https://github.com/google-research/bleurt). The team releasing BLEURT did not write a model card for this model, so this model card has been written by the Surfer team.

The original TensorFlow implementation was converted to PyTorch by the Surfer team with the help of [this article](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28). Visit us at [surferseo.com](https://surferseo.com).

### How to use

Since BLEURT is not implemented in the transformers library yet, you have to import BleurtModel from bleurt_model.py:

```python
import torch
from bleurt_model import BleurtModel
from transformers import BertTokenizerFast

model = BleurtModel.from_pretrained("SurferSEO/bleurt")
tokenizer = BertTokenizerFast.from_pretrained("SurferSEO/bleurt")

sentence_pairs = [("I love surfing.", "I'd like to surf.")]
encoded = tokenizer(sentence_pairs, padding=True, truncation=True, return_tensors="pt")
input_ids, attention_mask, token_type_ids = (
    encoded["input_ids"],
    encoded["attention_mask"],
    encoded["token_type_ids"],
)

with torch.set_grad_enabled(False):
    predictions = model(
        input_ids=input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
    )
print(predictions)
```
{"language": "en", "license": "apache-2.0"}
Surfer/bleurt
null
[ "transformers", "pytorch", "bert", "en", "arxiv:2004.04696", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
SurrealEverything/nmt_transformer_align
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
SuryA708/Eye-lid
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
Suva/uptag-email-model-v2
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
## Usage:

```python
abstract = """We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production machine learning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks. In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year, Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing. In that time, Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors 1.7-2.9 times versus production systems.
"""
```

### Using Transformers🤗

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Suva/uptag-url-model"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

input_ids = tokenizer.encode("summarize: " + abstract, return_tensors="pt", add_special_tokens=True)
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_length=100, repetition_penalty=2.5, length_penalty=1, early_stopping=True, num_return_sequences=3)

preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
print(preds)

# output
# ["Overton: Building, Deploying, and Monitoring Machine Learning Systems for Engineers",
#  "Overton: A System for Building, Monitoring, and Improving Production Machine Learning Systems",
#  "Overton: Building, Monitoring, and Improving Production Machine Learning Systems"]
```
{"license": "mit", "datasets": ["arxiv"], "widget": [{"text": "summarize: We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production machinelearning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks. In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year, Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing. In that time, Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors 1.7-2.9 times versus production systems."}]}
Suva/uptag-url-model
null
[ "transformers", "pytorch", "t5", "text2text-generation", "dataset:arxiv", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Suva/uptag-url-model2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# new-york-tokyo-london Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### London ![London](images/London.jpg) #### New York ![New York](images/New_York.jpg) #### Tokyo ![Tokyo](images/Tokyo.jpg)
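The card shows example classes but no inference code. Since the repo tags indicate a ViT image classifier, something like the following presumably works; the image path is a placeholder:

```python
# Sketch only: classify an image with the ViT classifier from this repo.
# The image path is a placeholder; any local photo of a city works.
from transformers import pipeline

classifier = pipeline("image-classification", model="Suzana/new-york-tokyo-london")
predictions = classifier("my_city_photo.jpg")
for p in predictions:
    print(p["label"], round(p["score"], 3))
```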
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
Suzana/new-york-tokyo-london
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
SvPolina/t5-small-finetuned-CANARD
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
SvyatoslavA/model_awara_text
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
SvyatoslavA/model_awara_text_classification
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Swati/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Swindsea/Swindsea
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Sxftx/Jungkook
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
# bert-german-dbmdz-uncased-sentence-stsb **This model is outdated!** The new [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) model is better for German language. It is also the current best model for English language and works cross-lingually. Please consider using that model.
{"language": "de", "license": "mit"}
T-Systems-onsite/bert-german-dbmdz-uncased-sentence-stsb
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "feature-extraction", "de", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["de", "es"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-de-es-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "de", "es", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
# Cross German & French RoBERTa for Sentence Embeddings
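The card text is only a heading. Judging from the cross-en-de sibling model documented later in this dump, the model is presumably used through the Sentence Transformers framework; a hedged sketch of cross-lingual semantic search under that assumption (the queries and corpus below are our own examples):

```python
# Assumed usage, mirroring the documented cross-en-de sibling model:
# cross-lingual semantic search between a German query and a French corpus.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("T-Systems-onsite/cross-de-fr-roberta-sentence-transformer")

corpus = ["Il fait beau aujourd'hui.", "Le train est en retard.", "J'aime la programmation."]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode(["Das Wetter ist heute schön."], convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)

for hit in hits[0]:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```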
{"language": ["fr", "de", "multilingual"], "license": "mit", "tags": ["sentence_embedding", "search", "pytorch", "xlm-roberta", "roberta", "xlm-r-distilroberta-base-paraphrase-v1"], "datasets": ["stsb_multi_mt"], "metrics": ["Spearman\u2019s rank correlation", "cosine similarity"]}
T-Systems-onsite/cross-de-fr-roberta-sentence-transformer
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "search", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "fr", "de", "multilingual", "dataset:stsb_multi_mt", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["de", "it"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-de-it-roberta-sentence-transformer
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "de", "it", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["nl", "de"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-de-nl-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "nl", "de", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["de", "pl"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-de-pl-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "de", "pl", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["de", "pt"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-de-pt-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "de", "pt", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["de", "ru"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-de-ru-roberta-sentence-transformer
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "de", "ru", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["de", "zh"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-de-zh-roberta-sentence-transformer
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "de", "zh", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "de", "es"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-de-es-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "de", "es", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "de", "fr"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-de-fr-roberta-sentence-transformer
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "de", "fr", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "de", "it"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-de-it-roberta-sentence-transformer
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "de", "it", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "de", "nl"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-de-nl-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "de", "nl", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "de", "pl"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-de-pl-roberta-sentence-transformer
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "de", "pl", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "de", "pt"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-de-pt-roberta-sentence-transformer
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "de", "pt", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
# Cross English & German RoBERTa for Sentence Embeddings

This model is intended to [compute sentence (text) embeddings](https://www.sbert.net/examples/applications/computing-embeddings/README.html) for English and German text. These embeddings can then be compared with [cosine-similarity](https://en.wikipedia.org/wiki/Cosine_similarity) to find sentences with a similar semantic meaning. For example, this can be useful for [semantic textual similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html), [semantic search](https://www.sbert.net/docs/usage/semantic_search.html), or [paraphrase mining](https://www.sbert.net/docs/usage/paraphrase_mining.html). To do this you have to use the [Sentence Transformers Python framework](https://github.com/UKPLab/sentence-transformers).

The speciality of this model is that it also works cross-lingually. Regardless of the language, the sentences are translated into very similar vectors according to their semantics. This means that you can, for example, enter a search in German and find results according to the semantics in both German and English. Using an XLM model and _multilingual finetuning with language-crossing_, we reach performance that even exceeds the best current dedicated English large model (see the Evaluation section below).

> Sentence-BERT (SBERT) is a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT.

Source: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)

This model was fine-tuned by [Philip May](https://may.la/) and open-sourced by [T-Systems-onsite](https://www.t-systems-onsite.de/). Special thanks to [Nils Reimers](https://www.nils-reimers.de/) for your awesome open-source work, the Sentence Transformers, the models and your help on GitHub.

## How to use

To use this model, install the `sentence-transformers` package (see here: <https://github.com/UKPLab/sentence-transformers>).

```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')
```

For details of usage and examples see here:
- [Computing Sentence Embeddings](https://www.sbert.net/docs/usage/computing_sentence_embeddings.html)
- [Semantic Textual Similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html)
- [Paraphrase Mining](https://www.sbert.net/docs/usage/paraphrase_mining.html)
- [Semantic Search](https://www.sbert.net/docs/usage/semantic_search.html)
- [Cross-Encoders](https://www.sbert.net/docs/usage/cross-encoder.html)
- [Examples on GitHub](https://github.com/UKPLab/sentence-transformers/tree/master/examples)

## Training

The base model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). This model has been further trained by [Nils Reimers](https://www.nils-reimers.de/) on a large scale paraphrase dataset for 50+ languages. [Nils Reimers](https://www.nils-reimers.de/) about this [on GitHub](https://github.com/UKPLab/sentence-transformers/issues/509#issuecomment-712243280):

> A paper is upcoming for the paraphrase models.
>
> These models were trained on various datasets with Millions of examples for paraphrases, mainly derived from Wikipedia edit logs, paraphrases mined from Wikipedia and SimpleWiki, paraphrases from news reports, AllNLI-entailment pairs with in-batch-negative loss etc.
>
> In internal tests, they perform much better than the NLI+STSb models as they have see more and broader type of training data. NLI+STSb has the issue that they are rather narrow in their domain and do not contain any domain specific words / sentences (like from chemistry, computer science, math etc.). The paraphrase models has seen plenty of sentences from various domains.
>
> More details with the setup, all the datasets, and a wider evaluation will follow soon.

The resulting model, called `xlm-r-distilroberta-base-paraphrase-v1`, has been released here: <https://github.com/UKPLab/sentence-transformers/releases/tag/v0.3.8>

Building on this cross-language model, we fine-tuned it for English and German on the [STSbenchmark](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) dataset. For German we used our [German STSbenchmark dataset](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark), which has been translated with [deepl.com](https://www.deepl.com/translator). In addition to the German and English training samples, we generated samples of English and German crossed. We call this _multilingual finetuning with language-crossing_. It doubled the training data size, and tests show that it further improves performance.

We did an automatic hyperparameter search for 33 trials with [Optuna](https://github.com/optuna/optuna). Using 10-fold cross-validation on the deepl.com test and dev dataset, we found the following best hyperparameters:
- batch_size = 8
- num_epochs = 2
- lr = 1.026343323298136e-05
- eps = 4.462251033010287e-06
- weight_decay = 0.04794438776350409
- warmup_steps_proportion = 0.1609010732760181

The final model was trained with these hyperparameters on the combination of the train and dev datasets from English, German and the crossings of them. The test set was left for testing.

# Evaluation

The evaluation was done on English, German and both languages crossed with the STSbenchmark test data. The evaluation code is available on [Colab](https://colab.research.google.com/drive/1gtGnKq_dYU_sDYqMohTYVMVpxMJjyH0M?usp=sharing). As the metric for evaluation we use Spearman's rank correlation between the cosine similarity of the sentence embeddings and the STSbenchmark labels.

| Model Name | Spearman<br/>German | Spearman<br/>English | Spearman<br/>EN-DE & DE-EN<br/>(cross) |
|------------|---------------------|----------------------|----------------------------------------|
| xlm-r-distilroberta-base-paraphrase-v1 | 0.8079 | 0.8350 | 0.7983 |
| [xlm-r-100langs-bert-base-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens) | 0.7877 | 0.8465 | 0.7908 |
| xlm-r-bert-base-nli-stsb-mean-tokens | 0.7877 | 0.8465 | 0.7908 |
| [roberta-large-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/roberta-large-nli-stsb-mean-tokens) | 0.6371 | 0.8639 | 0.4109 |
| [T-Systems-onsite/<br/>german-roberta-sentence-transformer-v2](https://huggingface.co/T-Systems-onsite/german-roberta-sentence-transformer-v2) | 0.8529 | 0.8634 | 0.8415 |
| [paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) | 0.8355 | **0.8682** | 0.8309 |
| **T-Systems-onsite/<br/>cross-en-de-roberta-sentence-transformer** | **0.8550** | 0.8660 | **0.8525** |

## License

Copyright (c) 2020 Philip May, T-Systems on site services GmbH

Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer/blob/main/LICENSE) in the repository.
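Building on the loading snippet in the card above, a short sketch of the cross-lingual comparison it describes: encode one English and one German sentence and score them with cosine similarity (the sentences are our own illustration, not from the card):

```python
# Sketch: cross-lingual similarity with the model loaded as shown in the card.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')

embeddings = model.encode([
    "The weather is nice today.",   # English
    "Das Wetter ist heute schön.",  # German paraphrase
])

# Cosine similarity between the two sentence vectors; values near 1 indicate similar meaning.
similarity = np.dot(embeddings[0], embeddings[1]) / (np.linalg.norm(embeddings[0]) * np.linalg.norm(embeddings[1]))
print(similarity)
```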
{"language": ["de", "en", "multilingual"], "license": "mit", "tags": ["sentence_embedding", "search", "pytorch", "xlm-roberta", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "paraphrase"], "datasets": ["stsb_multi_mt"], "metrics": ["Spearman\u2019s rank correlation", "cosine similarity"]}
T-Systems-onsite/cross-en-de-roberta-sentence-transformer
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "search", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "paraphrase", "de", "en", "multilingual", "dataset:stsb_multi_mt", "arxiv:1908.10084", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "de", "ru"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-de-ru-roberta-sentence-transformer
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "de", "ru", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "es", "pt"], "license": "mit", "tags": ["sentence_embedding"], "datasets": ["stsb_multi_mt"]}
T-Systems-onsite/cross-en-es-pt-roberta-sentence-transformer
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "es", "pt", "dataset:stsb_multi_mt", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "es"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-es-roberta-sentence-transformer
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "es", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "fr", "it"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-fr-it-roberta-sentence-transformer
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "fr", "it", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
# Cross English & French RoBERTa for Sentence Embeddings
{"language": ["fr", "en", "multilingual"], "license": "mit", "tags": ["sentence_embedding", "search", "pytorch", "xlm-roberta", "roberta", "xlm-r-distilroberta-base-paraphrase-v1"], "datasets": ["stsb_multi_mt"], "metrics": ["Spearman\u2019s rank correlation", "cosine similarity"]}
T-Systems-onsite/cross-en-fr-roberta-sentence-transformer
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "search", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "fr", "en", "multilingual", "dataset:stsb_multi_mt", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "it"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-it-roberta-sentence-transformer
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "it", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "nl", "fr"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-nl-fr-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "nl", "fr", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "nl", "it"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-nl-it-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "nl", "it", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "nl"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-nl-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "nl", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "pl", "it"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-pl-it-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "pl", "it", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "pl"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-pl-roberta-sentence-transformer
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "pl", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "pt"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-pt-roberta-sentence-transformer
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "pt", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "ru"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-ru-roberta-sentence-transformer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "ru", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{"language": ["en", "zh"], "license": "mit", "tags": ["sentence_embedding"]}
T-Systems-onsite/cross-en-zh-roberta-sentence-transformer
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "zh", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
# German RoBERTa for Sentence Embeddings V2

**The new [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) model is slightly better for the German language. It is also the current best model for the English language and works cross-lingually. Please consider using that model instead.**
{"language": "de", "license": "mit", "tags": ["sentence_embedding", "search", "pytorch", "xlm-roberta", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "paraphrase"], "datasets": ["STSbenchmark"], "metrics": ["Spearman\u2019s rank correlation", "cosine similarity"]}
T-Systems-onsite/german-roberta-sentence-transformer-v2
null
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "search", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "paraphrase", "de", "dataset:STSbenchmark", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
summarization
transformers
# mT5-small-sum-de-en-v2

This is a bilingual summarization model for English and German. It is based on the multilingual T5 model [google/mt5-small](https://huggingface.co/google/mt5-small).

## Training

The training was conducted with the following hyperparameters:

- base model: [google/mt5-small](https://huggingface.co/google/mt5-small)
- source_prefix: `"summarize: "`
- batch size: 3
- max_source_length: 800
- max_target_length: 96
- warmup_ratio: 0.3
- number of train epochs: 10
- gradient accumulation steps: 2
- learning rate: 5e-5

## Datasets and Preprocessing

The datasets were preprocessed as follows: the summary was tokenized with the [google/mt5-small](https://huggingface.co/google/mt5-small) tokenizer. Then only the records with no more than 94 summary tokens were selected.

The MLSUM dataset has a special characteristic. In the text, the summary is often included completely as one or more sentences. These have been removed from the texts. The reason is that we do not want to train a model that ultimately extracts only sentences as a summary.

This model is trained on the following datasets:

| Name | Language | License |
|------|----------|---------|
| [CNN Daily - Train](https://github.com/abisee/cnn-dailymail) | en | The license is unclear. The data comes from CNN and Daily Mail. We assume that it may only be used for research purposes and not commercially. |
| [Extreme Summarization (XSum) - Train](https://github.com/EdinburghNLP/XSum) | en | The license is unclear. The data comes from BBC. We assume that it may only be used for research purposes and not commercially. |
| [MLSUM German - Train](https://github.com/ThomasScialom/MLSUM) | de | Usage of dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders (see [here](https://github.com/ThomasScialom/MLSUM#mlsum)). |
| [SwissText 2019 - Train](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html) | de | The license is unclear. The data was published in the [German Text Summarization Challenge](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html). We assume that it may only be used for research purposes and not commercially. |

| Language | Size |
|----------|------|
| German | 302,607 |
| English | 422,228 |
| Total | 724,835 |

## Evaluation on MLSUM German Test Set (no beams)

| Model | rouge1 | rouge2 | rougeL | rougeLsum |
|-------|--------|--------|--------|-----------|
| [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946 |
| [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 21.7336 | 7.2614 | 17.1323 | 19.3977 |
| **T-Systems-onsite/mt5-small-sum-de-en-v2 (this)** | **21.7756** | **7.2662** | **17.1444** | **19.4242** |

## Evaluation on CNN Daily English Test Set (no beams)

| Model | rouge1 | rouge2 | rougeL | rougeLsum |
|-------|--------|--------|--------|-----------|
| [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 26.7664 | 8.8243 | 18.3703 | 23.2614 |
| [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364 |
| [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 37.576 | 14.7389 | 24.0254 | 34.4634 |
| [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 37.6339 | 16.5317 | 27.1418 | 34.9951 |
| **T-Systems-onsite/mt5-small-sum-de-en-v2 (this)** | **37.8096** | **16.6646** | **27.2239** | **35.1916** |

## Evaluation on Extreme Summarization (XSum) English Test Set (no beams)

| Model | rouge1 | rouge2 | rougeL | rougeLsum |
|-------|--------|--------|--------|-----------|
| [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 18.6204 | 3.535 | 12.3997 | 15.2111 |
| [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364 |
| [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 32.3416 | 10.6191 | 25.3799 | 25.3908 |
| T-Systems-onsite/mt5-small-sum-de-en-v2 (this) | 32.4828 | 10.7004 | 25.5238 | 25.5369 |
| [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 44.2553 &clubs; | 21.4289 &clubs; | 36.2639 &clubs; | 36.2696 &clubs; |

&clubs;: These values seem to be unusually high. It could be that the test set was used in the training data.

## License

Copyright (c) 2021 Philip May, T-Systems on site services GmbH

This work is licensed under the [Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)](https://creativecommons.org/licenses/by-nc-sa/3.0/) license.
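The card itself contains no usage snippet. As a hedged sketch only (assuming the checkpoint works with the standard `transformers` summarization pipeline and that the `"summarize: "` source prefix used during training should also be prepended at inference time), usage might look like this:

```python
from transformers import pipeline

# Sketch only: model id taken from this record; the prefix handling is an assumption.
summarizer = pipeline("summarization", model="T-Systems-onsite/mt5-small-sum-de-en-v2")

article = "Put a German or English news article here."  # placeholder text
summary = summarizer("summarize: " + article, max_length=96)
print(summary[0]["summary_text"])
```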
{"language": ["de", "en", "multilingual"], "license": "cc-by-nc-sa-4.0", "tags": ["summarization"], "datasets": ["cnn_dailymail", "xsum", "mlsum", "swiss_text_2019"]}
T-Systems-onsite/mt5-small-sum-de-en-v2
null
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "summarization", "de", "en", "multilingual", "dataset:cnn_dailymail", "dataset:xsum", "dataset:mlsum", "dataset:swiss_text_2019", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
T1Berger/bert-base-cased-goemotions-emotion5
null
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
TAHAhdvd/Yes
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
TAOC0002/bert
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# mGPT

mGPT is pre-trained on the [mC4 dataset](https://huggingface.co/datasets/mc4) using a causal language modeling objective. It was introduced in this [paper](https://arxiv.org/abs/2110.06609) and first released on this page.

## Model description

mGPT is a Transformer-based model that was pre-trained on massive multilingual data covering over 101 languages. Similar to GPT-2, it was pre-trained on the raw texts only, with no human labeling. We use the same tokenization and vocabulary as the [mT5 model](https://huggingface.co/google/mt5-base).

## Intended uses

You can use the raw model for text generation, or use prompts to adapt it to a downstream task.

## How to use

You can use this model directly with a pipeline for text generation. Here is how to generate text from a prompt in PyTorch:

```python
from transformers import MT5Tokenizer, GPT2LMHeadModel, TextGenerationPipeline

tokenizer = MT5Tokenizer.from_pretrained("THUMT/mGPT")
model = GPT2LMHeadModel.from_pretrained("THUMT/mGPT")
pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer)

text = "Replace me by any text you'd like."
text = pipeline(text, do_sample=True, max_length=1024)[0]["generated_text"]
```

## Preprocessing

The texts are tokenized using `sentencepiece` with a vocabulary size of 250,100. The inputs are sequences of 1,024 consecutive tokens. We use `<extra_id_0>` to separate lines in a document.

## BibTeX entry and citation info

```bibtex
@misc{tan2021msp,
      title={MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators},
      author={Zhixing Tan and Xiangwen Zhang and Shuo Wang and Yang Liu},
      year={2021},
      eprint={2110.06609},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
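As a small, hedged illustration of the `<extra_id_0>` line separator mentioned under Preprocessing (not part of the original card; it reuses the `pipeline` object from the "How to use" section, and the document lines are placeholders):

```python
# Hypothetical helper: join a multi-line document with the <extra_id_0>
# separator described in the Preprocessing section before generation.
document_lines = [
    "First line of the document.",
    "Second line of the document.",
]
prompt = "<extra_id_0>".join(document_lines)

generated = pipeline(prompt, do_sample=True, max_length=256)[0]["generated_text"]
print(generated)
```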
{}
THUMT/mGPT
null
[ "transformers", "pytorch", "gpt2", "text-generation", "arxiv:2110.06609", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
TIgb/girlfriend_uwu
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
TJ001/TJ
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# iSEEEK

A universal approach for integrating super large-scale single-cell transcriptomes by exploring gene rankings.

## A simple pipeline for single-cell analysis

```python
import torch
import gzip
import re
from tqdm import tqdm
import numpy as np
import scanpy as sc
from torch.utils.data import DataLoader, Dataset
from transformers import PreTrainedTokenizerFast, BertForMaskedLM


class LineDataset(Dataset):
    def __init__(self, lines):
        self.lines = lines
        self.regex = re.compile(r'\-|\.')

    def __getitem__(self, i):
        return self.regex.sub('_', self.lines[i])

    def __len__(self):
        return len(self.lines)


device = "cuda" if torch.cuda.is_available() else "cpu"
torch.set_num_threads(2)

tokenizer = PreTrainedTokenizerFast.from_pretrained("TJMUCH/transcriptome-iseeek")
model = BertForMaskedLM.from_pretrained("TJMUCH/transcriptome-iseeek").bert
model = model.to(device)
model.eval()

# Data deposited in https://huggingface.co/TJMUCH/transcriptome-iseeek/tree/main
lines = [s.strip().decode() for s in gzip.open("pbmc_ranking.txt.gz")]
labels = [s.strip().decode() for s in gzip.open("pbmc_label.txt.gz")]
labels = np.asarray(labels)

ds = LineDataset(lines)
dl = DataLoader(ds, batch_size=80)

features = []
for a in tqdm(dl, total=len(dl)):
    batch = tokenizer(a, max_length=128, truncation=True, padding=True, return_tensors="pt")
    for k, v in batch.items():
        batch[k] = v.to(device)
    with torch.no_grad():
        out = model(**batch)
    # Use the [CLS] embedding as the cell representation
    f = out.last_hidden_state[:, 0, :]
    features.extend(f.tolist())

features = np.stack(features)

adata = sc.AnnData(features)
adata.obs['celltype'] = labels
adata.obs.celltype = adata.obs.celltype.astype("category")
sc.pp.neighbors(adata, use_rep='X')
sc.tl.umap(adata)
sc.tl.leiden(adata)
sc.pl.umap(adata, color=['celltype', 'leiden'], save="UMAP")
```

## Extract token representations

```python
cell_counts = len(lines)
x = np.zeros((cell_counts, len(tokenizer)), dtype=np.float16)
counter = 0  # row index into x; missing in the original snippet

for a in tqdm(dl, total=len(dl)):
    batch = tokenizer(a, max_length=128, truncation=True, padding=True, return_tensors="pt")
    for k, v in batch.items():
        batch[k] = v.to(device)
    with torch.no_grad():
        out = model(**batch)

    eos_idxs = batch.attention_mask.sum(dim=1) - 1
    f = out.last_hidden_state
    batch_size = f.shape[0]
    input_ids = batch.input_ids

    for i in range(batch_size):
        # genes = tokenizer.batch_decode(input_ids[i])
        # L2 norm of each token embedding (excluding special tokens) as its weight
        token_norms = [f[i][j].norm().item() for j in range(1, eos_idxs[i])]
        idxs = input_ids[i].tolist()[1:eos_idxs[i]]
        x[counter, idxs] = token_norms
        counter = counter + 1
```
{}
TJMUCH/transcriptome-iseeek
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
TODBERT/TOD-BERT-JNT-V1
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
TODBERT/TOD-BERT-MLM-V1
null
[ "transformers", "pytorch", "tf", "jax", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
TODBERT/TOD-DistilBERT-JNT-V1
null
[ "transformers", "pytorch", "distilbert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# MASC

The final output model is: `model.pb`

The language model can be found at: https://huggingface.co/TRoboto/masc_kenlm_3grams_lm

To run the model, clone this repo and the language model repo, then follow the instructions here: https://deepspeech.readthedocs.io/en/master/USING.html

To use the checkpoint to retrain the model, clone this repo and follow the instructions here: https://deepspeech.readthedocs.io/en/r0.9/TRAINING.html
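As a hedged sketch of what inference with the DeepSpeech Python package typically looks like (the audio file name is a placeholder, the scorer file name is taken from the linked language-model repo, and the linked usage docs remain the authoritative reference):

```python
# Sketch only: assumes the DeepSpeech 0.9.x Python API and 16 kHz mono 16-bit audio.
import wave
import numpy as np
from deepspeech import Model

ds = Model("model.pb")                 # acoustic model from this repo
ds.enableExternalScorer("masc.scorer")  # scorer from the linked repo

with wave.open("example_ar.wav", "rb") as w:  # hypothetical test clip
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

print(ds.stt(audio))
```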
{}
TRoboto/masc_deepspeech_asr_model_v0
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
# MASC

The scorer model is available in this repository's files as `masc.scorer`

More info on how the scorer was produced: https://deepspeech.readthedocs.io/en/master/Scorer.html
{}
TRoboto/masc_kenlm_3grams_lm
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Trump Tweets DialoGPT Model
{"tags": ["conversational"]}
TTYU/DialoGPT-small-trump
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Iroh DialoGPT Model
{"tags": ["conversational"]}
TVLG/DialoGPT-small-Iroh-Bot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
TWP/Clone-Jun
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
hello hello hello hello
{}
TaahaKazi/bert-joke-identifier
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
TaahaKazi/joke-generator
null
[ "pytorch", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
hello hello
{}
TaahaKazi/joke-identifier-1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
hello
{}
TaahaKazi/joke-identifier-2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
hello
{}
TaahaKazi/joke-identifier-3
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
TaahaKazi/joke-identifier-4
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
TaahaKazi/joke-identifier-5
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
TaahaKazi/joke-identifier-6
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
hello
{}
TaahaKazi/joke-identifier-bert
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Taarieq/bert-base-uncased-finetuned-swag
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
TacticalSkado/Skadi
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
Taekyoon/dpr_context
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
Taekyoon/dpr_question
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# neg_komrc_train

This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4016

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1234
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.277         | 0.51  | 10000 | 0.4016          |
| 0.1671        | 1.03  | 20000 | 0.4116          |
| 0.1725        | 1.54  | 30000 | 0.4390          |
| 0.0868        | 2.06  | 40000 | 0.5147          |
| 0.0868        | 2.57  | 50000 | 0.5064          |

### Framework versions

- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.10.3
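The auto-generated card lacks a usage example. As a hedged sketch only (assuming the checkpoint works with the standard `transformers` extractive question-answering pipeline, as its `question-answering` tag suggests; the question and context strings are placeholders):

```python
from transformers import pipeline

# Sketch only: model id taken from this record.
qa = pipeline("question-answering", model="Taekyoon/neg_komrc_train")

result = qa(
    question="Which base model was fine-tuned?",
    context="The neg_komrc_train checkpoint is a fine-tuned version of beomi/kcbert-base.",
)
print(result["answer"], result["score"])
```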
{"tags": ["generated_from_trainer"], "model-index": [{"name": "neg_komrc_train", "results": []}]}
Taekyoon/neg_komrc_train
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
Taekyoon/test_bert_model
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00