| Column | Type | Values / Range |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | null |
| tags | listlengths | 1 to 1.84k |
| sha | null | null |
| created_at | stringlengths | 25 to 25 |
feature-extraction
transformers
# IndoConvBERT Base Model IndoConvBERT is a ConvBERT model pretrained on Indo4B. ## Pretraining details We follow a different training procedure: instead of using a two-phase approach that pre-trains the model for 90% of the steps with a sequence length of 128 and the remaining 10% with a sequence length of 512, we pre-train the model with a sequence length of 512 for 1M steps on a v3-8 TPU. The current version of the model is trained on Indo4B and a small Twitter dump. ## Acknowledgement Big thanks to TFRC (TensorFlow Research Cloud) for providing free TPUs.
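The card ships no usage code; a minimal feature-extraction sketch (assumed usage via the standard `transformers` Auto classes, not taken from the original card) might look like:

```python
from transformers import AutoTokenizer, AutoModel

# Assumed usage sketch -- not part of the original model card.
tokenizer = AutoTokenizer.from_pretrained("Wikidepia/IndoConvBERT-base")
model = AutoModel.from_pretrained("Wikidepia/IndoConvBERT-base")

inputs = tokenizer("Saya suka membaca buku", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```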
{"language": "id", "inference": false}
Wikidepia/IndoConvBERT-base
null
[ "transformers", "pytorch", "tf", "convbert", "feature-extraction", "id", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# Paraphrase Generation with IndoT5 Base

IndoT5-base fine-tuned on translated PAWS.

## Model in action

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Wikidepia/IndoT5-base-paraphrase")
model = AutoModelForSeq2SeqLM.from_pretrained("Wikidepia/IndoT5-base-paraphrase")

sentence = "Anak anak melakukan piket kelas agar kebersihan kelas terjaga"
text = "paraphrase: " + sentence + " </s>"

encoding = tokenizer(text, padding='longest', return_tensors="pt")
outputs = model.generate(
    input_ids=encoding["input_ids"],
    attention_mask=encoding["attention_mask"],
    max_length=512,
    do_sample=True,
    top_k=200,
    top_p=0.95,
    early_stopping=True,
    num_return_sequences=5
)

# Decode the sampled paraphrases
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

## Limitations

Sometimes the paraphrase contains dates that do not exist in the original text :/

## Acknowledgement

Thanks to TensorFlow Research Cloud for providing TPU v3-8s.
{"language": ["id"]}
Wikidepia/IndoT5-base-paraphrase
null
[ "transformers", "pytorch", "jax", "tensorboard", "t5", "text2text-generation", "id", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# Indonesian T5 Base T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with [extra filtering](https://github.com/Wikidepia/indonesian_datasets/tree/master/dump/mc4). This model is pre-trained only and needs to be fine-tuned before it can be used for specific tasks. ## Pretraining Details Trained for 1M steps following [`google/t5-v1_1-base`](https://huggingface.co/google/t5-v1_1-base). ## Model Performance TBD ## Limitations and bias Like other language models pretrained on large-scale corpora, this model may produce biased (unethical, harmful) output that reflects biases in its training data. Please keep this risk in mind and use the model only in applications where such output cannot cause harm. ## Acknowledgement Thanks to TensorFlow Research Cloud for providing TPU v3-8s.
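As a rough loading sketch before fine-tuning (illustrative only; the card itself ships no usage code):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Illustrative sketch -- the checkpoint is pre-trained only and should be
# fine-tuned on a downstream task before use.
tokenizer = T5Tokenizer.from_pretrained("Wikidepia/IndoT5-base")
model = T5ForConditionalGeneration.from_pretrained("Wikidepia/IndoT5-base")
# ... fine-tune `model` on your task, e.g. with the transformers Trainer API ...
```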
{"language": ["id"], "datasets": ["allenai/c4"]}
Wikidepia/IndoT5-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "id", "dataset:allenai/c4", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
**NOTE**: This model might be broken :/ # Indonesian T5 Large T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with [extra filtering](https://github.com/Wikidepia/indonesian_datasets/tree/master/dump/mc4). This model is pre-trained only and needs to be fine-tuned before it can be used for specific tasks. ## Pretraining Details Trained for 500K steps following [`google/t5-v1_1-large`](https://huggingface.co/google/t5-v1_1-large). ## Model Performance TBD ## Limitations and bias Like other language models pretrained on large-scale corpora, this model may produce biased (unethical, harmful) output that reflects biases in its training data. Please keep this risk in mind and use the model only in applications where such output cannot cause harm. ## Acknowledgement Thanks to TensorFlow Research Cloud for providing TPU v3-8s.
{"language": ["id"], "datasets": ["allenai/c4"]}
Wikidepia/IndoT5-large
null
[ "transformers", "pytorch", "t5", "text2text-generation", "id", "dataset:allenai/c4", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# Indonesian T5 Small T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with [extra filtering](https://github.com/Wikidepia/indonesian_datasets/tree/master/dump/mc4). This model is pre-trained only and needs to be fine-tuned before it can be used for specific tasks. ## Pretraining Details Trained for 1M steps following [`google/t5-v1_1-small`](https://huggingface.co/google/t5-v1_1-small). ## Model Performance TBD ## Limitations and bias Like other language models pretrained on large-scale corpora, this model may produce biased (unethical, harmful) output that reflects biases in its training data. Please keep this risk in mind and use the model only in applications where such output cannot cause harm. ## Acknowledgement Thanks to TensorFlow Research Cloud for providing TPU v3-8s.
{"language": ["id"], "datasets": ["allenai/c4"]}
Wikidepia/IndoT5-small
null
[ "transformers", "pytorch", "t5", "text2text-generation", "id", "dataset:allenai/c4", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
# SponsorBlock Auto Segment
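The card is title-only; a minimal sketch with the standard flair API (assuming the checkpoint loads as a flair `SequenceTagger`; the example sentence is a made-up placeholder):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Assumed usage -- the original card ships no example.
tagger = SequenceTagger.load("Wikidepia/SB-AutoSegment")

sentence = Sentence("Don't forget to like and subscribe, and use code EXAMPLE for ten percent off!")
tagger.predict(sentence)

# Print the predicted segment labels
for label in sentence.get_labels():
    print(label)
```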
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"]}
Wikidepia/SB-AutoSegment
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# SQuAD IndoBERT-Lite Base Model

Fine-tuned IndoBERT-Lite from IndoBenchmark using Translated SQuAD datasets.

## How to use

### Using pipeline

```python
from transformers import BertTokenizerFast, pipeline

tokenizer = BertTokenizerFast.from_pretrained('Wikidepia/albert-bahasa-uncased-squad')
nlp = pipeline('question-answering', model="Wikidepia/albert-bahasa-uncased-squad", tokenizer=tokenizer)

QA_input = {
    'question': 'Kapan orang Normandia berada di Normandia?',
    'context': 'The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) adalah orang-orang yang pada abad ke-10 dan ke-11 memberikan nama mereka ke Normandia, sebuah wilayah di Prancis. Mereka adalah keturunan dari Norse ("Norman" berasal dari "Norseman") perampok dan perompak dari Denmark, Islandia dan Norwegia yang, di bawah pemimpin mereka Rollo, setuju untuk bersumpah setia kepada Raja Charles III dari Francia Barat. Melalui generasi asimilasi dan pencampuran dengan penduduk asli Franka dan Romawi-Gaul, keturunan mereka secara bertahap akan bergabung dengan budaya Francia Barat yang berbasis di Karoling. Identitas budaya dan etnis orang Normandia yang berbeda awalnya muncul pada paruh pertama abad ke-10, dan terus berkembang selama abad-abad berikutnya.'
}

res = nlp(QA_input)
print(res)
```
{"language": "id", "inference": false}
Wikidepia/albert-bahasa-uncased-squad
null
[ "transformers", "pytorch", "albert", "question-answering", "id", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# IndoBERT-Lite base fine-tuned on Translated SQuAD v2

[IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2) trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/SQuAD) for the **Q&A** downstream task.

## Model in action

Fast usage with **pipelines**:

```python
from transformers import BertTokenizerFast, pipeline

tokenizer = BertTokenizerFast.from_pretrained('Wikidepia/indobert-lite-squad')
qa_pipeline = pipeline(
    "question-answering",
    model="Wikidepia/indobert-lite-squad",
    tokenizer=tokenizer
)

qa_pipeline({
    'context': "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900.",
    'question': "Kapan Einstein melepas kewarganegaraan Jerman?"
})
```

# Output:

```json
{
  "score": 0.9799205660820007,
  "start": 147,
  "end": 151,
  "answer": "1896"
}
```

README copied from [mrm8488's repository](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2)
{"language": "id", "widget": [{"text": "Kapan Einstein melepas kewarganegaraan Jerman?", "context": "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgen\u00f6ssische Technische Hochschule, ETH) di Z\u00fcrich pada tahun 1900."}]}
Wikidepia/indobert-lite-squad
null
[ "transformers", "pytorch", "albert", "question-answering", "id", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# IndoBERT-Lite-SQuAD base fine-tuned on Full Translated SQuAD v2

[IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2) trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/squad) for the **Q&A** downstream task.

## Model in action

Fast usage with **pipelines**:

```python
from transformers import BertTokenizerFast, pipeline

tokenizer = BertTokenizerFast.from_pretrained('Wikidepia/indobert-lite-squadx')
qa_pipeline = pipeline(
    "question-answering",
    model="Wikidepia/indobert-lite-squadx",
    tokenizer=tokenizer
)

qa_pipeline({
    'context': "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900.",
    'question': "Kapan Einstein melepas kewarganegaraan Jerman?"
})
```

# Output:

```json
{
  "score": 0.9169162511825562,
  "start": 147,
  "end": 151,
  "answer": "1896"
}
```

README copied from [mrm8488's repository](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2)
{"language": "id", "widget": [{"text": "Kapan Einstein melepas kewarganegaraan Jerman?", "context": "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgen\u00f6ssische Technische Hochschule, ETH) di Z\u00fcrich pada tahun 1900."}]}
Wikidepia/indobert-lite-squadx
null
[ "transformers", "pytorch", "albert", "question-answering", "id", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
Wikidepia/indonesian-punctuation
null
[ "transformers", "pytorch", "albert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# NMT Model for English-Indonesian
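The card is otherwise empty; a minimal translation sketch with the standard MarianMT classes (assumed usage, not from the card):

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed usage sketch for this English -> Indonesian checkpoint.
tokenizer = MarianTokenizer.from_pretrained("Wikidepia/marian-nmt-enid")
model = MarianMTModel.from_pretrained("Wikidepia/marian-nmt-enid")

batch = tokenizer(["I love reading books."], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```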
{}
Wikidepia/marian-nmt-enid
null
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wikidepia/quartznet-indonesian
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
{}
Wikidepia/w2v2-id-tmp
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
# Wav2Vec2 XLS-R-300M - Indonesian This model is a fine-tuned version of `facebook/wav2vec2-xls-r-300m` on the `mozilla-foundation/common_voice_8_0` dataset and the [MagicHub Indonesian Conversational Speech Corpus](https://magichub.com/datasets/indonesian-conversational-speech-corpus/).
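A minimal transcription sketch via the ASR pipeline (illustrative; `sample.wav` is a placeholder for your own audio file):

```python
from transformers import pipeline

# Illustrative sketch -- "sample.wav" is a placeholder path; wav2vec2 models
# expect 16 kHz mono audio.
asr = pipeline("automatic-speech-recognition", model="Wikidepia/wav2vec2-xls-r-300m-indonesian")
print(asr("sample.wav"))
```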
{"language": ["id"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "id", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLS-R-300M - Indonesian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "id"}, "metrics": [{"type": "wer", "value": 5.046, "name": "Test WER"}, {"type": "cer", "value": 1.699, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "id"}, "metrics": [{"type": "wer", "value": 41.31, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "id"}, "metrics": [{"type": "wer", "value": 52.23, "name": "Test WER"}]}]}]}
Wikidepia/wav2vec2-xls-r-300m-indonesian
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "id", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/bart-large-cnn-multi-en-wiki-news
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/bart-large-multi-combine-wiki-news
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/bart-large-multi-de-wiki-news
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/bart-large-multi-en-wiki-news
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/bart-large-multi-fr-wiki-news
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/bert2bert-multi-de-wiki-news
null
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/bert2bert-multi-en-wiki-news
null
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/bert2bert-multi-fr-wiki-news
null
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/t5-base-multi-combine-wiki-news
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/t5-base-multi-de-wiki-news
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/t5-base-multi-en-wiki-news
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/t5-base-multi-fr-wiki-news
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/t5-base-with-title-multi-de-wiki-news
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/t5-base-with-title-multi-en-wiki-news
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
WikinewsSum/t5-base-with-title-multi-fr-wiki-news
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Williwaw/prac
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
Wilson2021/bert_cn_finetuning_model01
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
Wilson2021/mymodel1007
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
Google didn't publish vit-tiny and vit-small model checkpoints on Hugging Face. I converted the weights from the [timm repository](https://github.com/rwightman/pytorch-image-models). This model is used in the same way as [ViT-base](https://huggingface.co/google/vit-base-patch16-224). Note that the safetensors weights require a torch 2.0 environment.
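Since the card states the model is used the same way as ViT-base, a minimal classification sketch (assumed, mirroring the usual ViT example):

```python
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

# Assumed usage, mirroring the google/vit-base-patch16-224 example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained("WinKawaks/vit-small-patch16-224")
model = ViTForImageClassification.from_pretrained("WinKawaks/vit-small-patch16-224")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```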
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
WinKawaks/vit-small-patch16-224
null
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "vision", "dataset:imagenet", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
Google didn't publish vit-tiny and vit-small model checkpoints on Hugging Face. I converted the weights from the [timm repository](https://github.com/rwightman/pytorch-image-models). This model is used in the same way as [ViT-base](https://huggingface.co/google/vit-base-patch16-224). Note that the safetensors weights require a torch 2.0 environment.
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
WinKawaks/vit-tiny-patch16-224
null
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "vision", "dataset:imagenet", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WindJ/hello_world_model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wins/gpt_law
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
Wintermute/Wintermute
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
Wintermute/Wintermute_extended
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# JC DialogGPT Model
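The card is title-only; a minimal chat sketch in the usual DialoGPT style (assumed usage, not from the card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed usage in the standard DialoGPT pattern -- not from the original card.
tokenizer = AutoTokenizer.from_pretrained("Wise/DialogGPT-small-JC")
model = AutoModelForCausalLM.from_pretrained("Wise/DialogGPT-small-JC")

input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```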
{"tags": ["conversational"]}
Wise/DialogGPT-small-JC
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WithYou/model_test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wolferella/gpt-neo-2.7B
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
Wonjun/KPTBert
null
[ "transformers", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Woonn/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Woonn/kcbert-base-finetuned-nsmc
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Woonn/kcbert-base-finetuned-sst2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Woonn/kogpt2-base-v2-finetuned-sst2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2162 - Accuracy: 0.9225 - F1: 0.9227 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8437 | 1.0 | 250 | 0.3153 | 0.903 | 0.9005 | | 0.2467 | 2.0 | 500 | 0.2162 | 0.9225 | 0.9227 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cpu - Datasets 1.18.3 - Tokenizers 0.11.0
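A minimal inference sketch for the fine-tuned checkpoint (assumed usage; not part of the auto-generated card):

```python
from transformers import pipeline

# Assumed usage sketch -- the labels come from the emotion dataset.
classifier = pipeline("text-classification", model="Worldman/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```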
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9225, "name": "Accuracy"}, {"type": "f1", "value": 0.9227046184638882, "name": "F1"}]}]}]}
Worldman/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WoutN2001/DialoGPT-small-joshua
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WoutN2001/james
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WoutN2001/james2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# waaaa
{"tags": ["conversational"]}
WoutN2001/james3
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WoutN2001/james6
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wtfray/Rayssa
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wuhu0/output1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WurmWillem/DialoGPT-medium-Rick
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WurmWillem/DialoGPT-medium-RickandMorty
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WurmWillem/DialoGPT-medium-RickandMorty2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
null
{"tags": ["conversational"]}
WurmWillem/DialoGPT-medium-RickandMorty3
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WurmWillem/Rick
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wusgnob/t5-small-finetuned-xsum
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Ww0042/testface
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
Wzf/bert_fintuuing
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-fakenews-discriminator Dataset: [Fake and real news dataset](https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset). I use the title and label to train the classifier. label_0: fake news; label_1: real news. This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0910 - Accuracy: 0.9758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0452 | 1.0 | 1768 | 0.0910 | 0.9758 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
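Given the label mapping above, a minimal inference sketch (assumed usage; not part of the generated card):

```python
from transformers import pipeline

# Assumed usage -- per the card, LABEL_0 = fake news, LABEL_1 = real news.
classifier = pipeline("text-classification", model="XSY/albert-base-v2-fakenews-discriminator")
print(classifier("Scientists confirm the moon is made entirely of cheese"))
```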
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-fakenews-discriminator", "results": []}]}
XSY/albert-base-v2-fakenews-discriminator
null
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-imdb-calssification label_0: negative; label_1: positive. This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1983 - Accuracy: 0.9361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.26 | 1.0 | 1563 | 0.1983 | 0.9361 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-imdb-calssification", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.93612, "name": "Accuracy"}]}]}]}
XSY/albert-base-v2-imdb-calssification
null
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-scarcasm-discriminator This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2379 - Accuracy: 0.8996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2111 | 1.0 | 2179 | 0.2379 | 0.8996 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-scarcasm-discriminator", "results": []}]}
XSY/albert-base-v2-scarcasm-discriminator
null
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-scarcasm-discriminator Base model: roberta-base. label_0: unsarcastic; label_1: sarcastic. The fine-tuning method is in my GitHub repo: https://github.com/yangyangxusheng/Fine-tune-use-transformers This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1844 - Accuracy: 0.9698 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.144 | 1.0 | 2179 | 0.2522 | 0.9215 | | 0.116 | 2.0 | 4358 | 0.2105 | 0.9530 | | 0.0689 | 3.0 | 6537 | 0.2015 | 0.9610 | | 0.028 | 4.0 | 8716 | 0.1844 | 0.9698 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-scarcasm-discriminator", "results": []}]}
XSY/roberta-scarcasm-discriminator
null
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
This model was fine-tuned step by step following this notebook; if you want to fine-tune it yourself, please refer to https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb --- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 28.6901 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4500 - Rouge1: 28.6901 - Rouge2: 8.0102 - Rougel: 22.6087 - Rougelsum: 22.6105 - Gen Len: 18.824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.6799 | 1.0 | 25506 | 2.4500 | 28.6901 | 8.0102 | 22.6087 | 22.6105 | 18.824 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
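A minimal summarization sketch (assumed usage; the card itself only links the fine-tuning notebook, and the input text is a placeholder):

```python
from transformers import pipeline

# Assumed usage sketch for this XSum-finetuned checkpoint.
summarizer = pipeline("summarization", model="XSY/t5-small-finetuned-xsum")
article = "Your long news article goes here ..."
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```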
{}
XSY/t5-small-finetuned-xsum
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
XYG/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# Model Trained Using AutoNLP

- Problem type: Binary Classification
- Model ID: 478412765
- CO2 Emissions (in grams): 69.86520391863117

## Validation Metrics

- Loss: 0.186362624168396
- Accuracy: 0.9539955699437723
- Precision: 0.9527454242928453
- Recall: 0.9572049481778669
- AUC: 0.9903929997079495
- F1: 0.9549699799866577

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/XYHY/autonlp-123-478412765
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("XYHY/autonlp-123-478412765", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("XYHY/autonlp-123-478412765", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
{"language": "unk", "tags": "autonlp", "datasets": ["XYHY/autonlp-data-123"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 69.86520391863117}
XYHY/autonlp-123-478412765
null
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "unk", "dataset:XYHY/autonlp-data-123", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Xan/Hhhehe
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Xanlete/DialoGPT-small-harrypotter
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
Xenova/sponsorblock-base-v1.1
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
Xenova/sponsorblock-base-v1
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Xenova/sponsorblock-classifier
null
[ "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
Xenova/sponsorblock-small
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Ultron Small
{"tags": ["conversational"]}
Xeouz/Ultron-Small
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Xia/albert
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
XiangPan/roberta_squad1_2epoch
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Xianshu/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
XiaoqiJiao/2nd_General_TinyBERT_6L_768D
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
XiaoqiJiao/TinyBERT_General_4L_312D
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
XiaoqiJiao/TinyBERT_General_6L_768D
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
A VQGAN-compatible model trained on screenshots of cityscapes from 90s anime. To use, point VQGAN at this model as you would vqgan_imagenet_f16_1024, faceshq, etc.
{}
Xibanya/AestheticCities
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-image
null
# Sunset Cities

This is the [Malevich](https://huggingface.co/sberbank-ai/rudalle-Malevich) ruDALL-E model finetuned on anime screenshots of big cities at sunset.

<img style="text-align:center; display:block;" src="https://huggingface.co/Xibanya/sunset_city/resolve/main/citysunset.png" width="256">

### installation

```
pip install rudalle
```

### How to use

Basic implementation to get a list of image data objects.

```python
import torch

from translate import Translator
from rudalle import get_rudalle_model, get_tokenizer, get_vae
from rudalle.pipelines import generate_images

model = get_rudalle_model('Malevich', pretrained=True, fp16=True, device='cuda')
model.load_state_dict(torch.load(CHECKPOINT_PATH))  # CHECKPOINT_PATH: path to the finetuned checkpoint
vae = get_vae().to('cuda')
tokenizer = get_tokenizer()

input_text = Translator(to_lang='ru').translate('city at sunset')

images, _ = generate_images(
    text=input_text,
    tokenizer=tokenizer,
    dalle=model,
    vae=vae,
    images_num=1,
    top_k=2048,
    top_p=0.95,
    temperature=1.0
)
```

The Malevich model only recognizes input in Russian. If you're going to paste Cyrillic directly into the code rather than filter an English prompt through the translate API, you will need to put this at the top of the file:

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
```
{"language": ["ru", "en"], "license": "cc-by-sa-4.0", "tags": ["PyTorch", "Transformers"], "pipeline_tag": "text-to-image"}
Xibanya/sunset_city
null
[ "PyTorch", "Transformers", "text-to-image", "ru", "en", "license:cc-by-sa-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Xillolxlbln/alkx
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Xillolxlbln/wav2vec2-large-xls-r-300m-nl-colab_khaled
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Xonlly/test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Harry
{"tags": ["conversational"]}
XuguangAi/DialoGPT-small-Harry
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Leslie
{"tags": ["conversational"]}
XuguangAi/DialoGPT-small-Leslie
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Rick
{"tags": ["conversational"]}
XuguangAi/DialoGPT-small-Rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# Toxic language detection

## Model description

A toxic language detection model trained on tweets. The base model is RoBERTa-large. For more information, including the **training data**, **limitations and bias**, please refer to the [paper](https://arxiv.org/pdf/2102.00086.pdf) and the Github [repo](https://github.com/XuhuiZhou/Toxic_Debias).

#### How to use

Note that LABEL_1 means toxic and LABEL_0 means non-toxic in the output.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model='Xuhui/ToxDect-roberta-large', return_all_scores=True)
prediction = classifier("You are f**king stupid!")
print(prediction)
"""
Output:
[[{'label': 'LABEL_0', 'score': 0.002632011892274022}, {'label': 'LABEL_1', 'score': 0.9973680377006531}]]
"""
```

## Training procedure

The random seed for this model is 22. For other details, please refer to the Github [repo](https://github.com/XuhuiZhou/Toxic_Debias).

### BibTeX entry and citation info

```bibtex
@inproceedings{zhou-etal-2020-debiasing,
  title     = {Challenges in Automated Debiasing for Toxic Language Detection},
  author    = {Zhou, Xuhui and Sap, Maarten and Swayamdipta, Swabha and Choi, Yejin and Smith, Noah A.},
  booktitle = {EACL},
  year      = {2021},
  url       = {https://www.aclweb.org/anthology/2021.eacl-main.274.pdf}
}
```
{"language": [], "tags": [], "datasets": [], "metrics": []}
Xuhui/ToxDect-roberta-large
null
[ "transformers", "pytorch", "roberta", "text-classification", "arxiv:2102.00086", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
XxProKillerxX/Meh
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
YESO/yeso
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
YNJI/ELECTRA
null
[ "transformers", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
YSKartal/berturk-social-5m
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
YYF/bert-finetuning-test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
YYF/test_hello
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Appreciation of Classical Kunqu Opera: Final Assignment ## KunquChat Author: 1900012921 Yu Yuejiang (俞跃江)
{}
YYJ/KunquChat
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
YacShin/LocationAddressV1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Yagahu/Bahi
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00